00:00:00.001 Started by upstream project "autotest-per-patch" build number 132118
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.187 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.188 The recommended git tool is: git
00:00:00.189 using credential 00000000-0000-0000-0000-000000000002
00:00:00.190 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.236 Fetching changes from the remote Git repository
00:00:00.237 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.286 Using shallow fetch with depth 1
00:00:00.286 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.286 > git --version # timeout=10
00:00:00.328 > git --version # 'git version 2.39.2'
00:00:00.328 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.362 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.362 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.705 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.718 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.731 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:04.731 > git config core.sparsecheckout # timeout=10
00:00:04.743 > git read-tree -mu HEAD # timeout=10
00:00:04.760 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:04.778 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:04.779 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:04.872 [Pipeline] Start of Pipeline
00:00:04.886 [Pipeline] library
00:00:04.887 Loading library shm_lib@master
00:00:04.887 Library shm_lib@master is cached. Copying from home.
00:00:04.906 [Pipeline] node
00:00:04.914 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.916 [Pipeline] {
00:00:04.927 [Pipeline] catchError
00:00:04.929 [Pipeline] {
00:00:04.943 [Pipeline] wrap
00:00:04.952 [Pipeline] {
00:00:04.960 [Pipeline] stage
00:00:04.962 [Pipeline] { (Prologue)
00:00:05.193 [Pipeline] sh
00:00:05.481 + logger -p user.info -t JENKINS-CI
00:00:05.500 [Pipeline] echo
00:00:05.502 Node: CYP11
00:00:05.521 [Pipeline] sh
00:00:05.907 [Pipeline] setCustomBuildProperty
00:00:05.919 [Pipeline] echo
00:00:05.921 Cleanup processes
00:00:05.929 [Pipeline] sh
00:00:06.214 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.214 516864 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.228 [Pipeline] sh
00:00:06.512 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.512 ++ grep -v 'sudo pgrep'
00:00:06.512 ++ awk '{print $1}'
00:00:06.512 + sudo kill -9
00:00:06.512 + true
00:00:06.527 [Pipeline] cleanWs
00:00:06.536 [WS-CLEANUP] Deleting project workspace...
00:00:06.536 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.543 [WS-CLEANUP] done
00:00:06.547 [Pipeline] setCustomBuildProperty
00:00:06.561 [Pipeline] sh
00:00:06.881 + sudo git config --global --replace-all safe.directory '*'
00:00:06.967 [Pipeline] httpRequest
00:00:07.362 [Pipeline] echo
00:00:07.364 Sorcerer 10.211.164.101 is alive
00:00:07.370 [Pipeline] retry
00:00:07.372 [Pipeline] {
00:00:07.384 [Pipeline] httpRequest
00:00:07.388 HttpMethod: GET
00:00:07.389 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.390 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.393 Response Code: HTTP/1.1 200 OK
00:00:07.393 Success: Status code 200 is in the accepted range: 200,404
00:00:07.393 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.824 [Pipeline] }
00:00:07.841 [Pipeline] // retry
00:00:07.848 [Pipeline] sh
00:00:08.131 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.148 [Pipeline] httpRequest
00:00:08.555 [Pipeline] echo
00:00:08.556 Sorcerer 10.211.164.101 is alive
00:00:08.566 [Pipeline] retry
00:00:08.568 [Pipeline] {
00:00:08.581 [Pipeline] httpRequest
00:00:08.586 HttpMethod: GET
00:00:08.587 URL: http://10.211.164.101/packages/spdk_b7ef84b3d354b6edb67752f7919c071f5334bc2a.tar.gz
00:00:08.587 Sending request to url: http://10.211.164.101/packages/spdk_b7ef84b3d354b6edb67752f7919c071f5334bc2a.tar.gz
00:00:08.590 Response Code: HTTP/1.1 200 OK
00:00:08.590 Success: Status code 200 is in the accepted range: 200,404
00:00:08.591 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b7ef84b3d354b6edb67752f7919c071f5334bc2a.tar.gz
00:00:28.838 [Pipeline] }
00:00:28.856 [Pipeline] // retry
00:00:28.863 [Pipeline] sh
00:00:29.147 + tar --no-same-owner -xf spdk_b7ef84b3d354b6edb67752f7919c071f5334bc2a.tar.gz
00:00:31.697 [Pipeline] sh
00:00:31.980 + git -C spdk log --oneline -n5
00:00:31.980 b7ef84b3d bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata
00:00:31.980 079966333 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:00:31.980 159fecd99 accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:00:31.980 6a3a0b5fb bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:00:31.980 32c6c4b3a bdev: Rename _bdev_memory_domain_io_get_buf() by bdev_io_get_bounce_buf()
00:00:31.993 [Pipeline] }
00:00:32.010 [Pipeline] // stage
00:00:32.020 [Pipeline] stage
00:00:32.022 [Pipeline] { (Prepare)
00:00:32.040 [Pipeline] writeFile
00:00:32.057 [Pipeline] sh
00:00:32.342 + logger -p user.info -t JENKINS-CI
00:00:32.356 [Pipeline] sh
00:00:32.640 + logger -p user.info -t JENKINS-CI
00:00:32.652 [Pipeline] sh
00:00:32.936 + cat autorun-spdk.conf
00:00:32.936 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.936 SPDK_TEST_NVMF=1
00:00:32.936 SPDK_TEST_NVME_CLI=1
00:00:32.936 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.936 SPDK_TEST_NVMF_NICS=e810
00:00:32.936 SPDK_TEST_VFIOUSER=1
00:00:32.936 SPDK_RUN_UBSAN=1
00:00:32.936 NET_TYPE=phy
00:00:32.944 RUN_NIGHTLY=0
00:00:32.948 [Pipeline] readFile
00:00:32.973 [Pipeline] withEnv
00:00:32.975 [Pipeline] {
00:00:32.986 [Pipeline] sh
00:00:33.270 + set -ex
00:00:33.270 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:33.270 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.270 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.270 ++ SPDK_TEST_NVMF=1
00:00:33.270 ++ SPDK_TEST_NVME_CLI=1
00:00:33.270 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.270 ++ SPDK_TEST_NVMF_NICS=e810
00:00:33.270 ++ SPDK_TEST_VFIOUSER=1
00:00:33.270 ++ SPDK_RUN_UBSAN=1
00:00:33.270 ++ NET_TYPE=phy
00:00:33.270 ++ RUN_NIGHTLY=0
00:00:33.270 + case $SPDK_TEST_NVMF_NICS in
00:00:33.270 + DRIVERS=ice
00:00:33.270 + [[ tcp == \r\d\m\a ]]
00:00:33.270 + [[ -n ice ]]
00:00:33.270 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:33.270 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:41.399 rmmod: ERROR: Module irdma is not currently loaded
00:00:41.399 rmmod: ERROR: Module i40iw is not currently loaded
00:00:41.399 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:41.399 + true
00:00:41.399 + for D in $DRIVERS
00:00:41.399 + sudo modprobe ice
00:00:41.399 + exit 0
00:00:41.408 [Pipeline] }
00:00:41.419 [Pipeline] // withEnv
00:00:41.423 [Pipeline] }
00:00:41.434 [Pipeline] // stage
00:00:41.442 [Pipeline] catchError
00:00:41.444 [Pipeline] {
00:00:41.455 [Pipeline] timeout
00:00:41.455 Timeout set to expire in 1 hr 0 min
00:00:41.457 [Pipeline] {
00:00:41.468 [Pipeline] stage
00:00:41.469 [Pipeline] { (Tests)
00:00:41.481 [Pipeline] sh
00:00:41.764 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.764 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.764 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.764 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:41.764 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:41.764 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:41.764 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:41.764 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:41.764 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:41.764 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:41.764 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:41.764 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.764 + source /etc/os-release
00:00:41.764 ++ NAME='Fedora Linux'
00:00:41.764 ++ VERSION='39 (Cloud Edition)'
00:00:41.764 ++ ID=fedora
00:00:41.764 ++ VERSION_ID=39
00:00:41.764 ++ VERSION_CODENAME=
00:00:41.764 ++ PLATFORM_ID=platform:f39
00:00:41.764 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:41.764 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:41.764 ++ LOGO=fedora-logo-icon
00:00:41.764 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:41.764 ++ HOME_URL=https://fedoraproject.org/
00:00:41.764 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:41.764 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:41.764 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:41.764 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:41.764 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:41.764 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:41.764 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:41.764 ++ SUPPORT_END=2024-11-12
00:00:41.764 ++ VARIANT='Cloud Edition'
00:00:41.764 ++ VARIANT_ID=cloud
00:00:41.764 + uname -a
00:00:41.764 Linux spdk-cyp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:41.764 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:44.305 Hugepages
00:00:44.305 node hugesize free / total
00:00:44.305 node0 1048576kB 0 / 0
00:00:44.305 node0 2048kB 0 / 0
00:00:44.305 node1 1048576kB 0 / 0
00:00:44.305 node1 2048kB 0 / 0
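The hugepage table above shows 0 free / 0 total on both NUMA nodes, i.e. nothing is reserved yet at this point in the run; the per-device table from the same setup.sh status call follows below. As a hedged sketch of how hugepages would be reserved for a manual run with the same script, assuming the HUGEMEM knob (megabytes) that scripts/setup.sh documents, with the 2048 value purely illustrative:

  # Reserve hugepages, then re-check the same status tables (value illustrative).
  sudo HUGEMEM=2048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status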
00:00:44.305
00:00:44.305 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:44.305 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:44.305 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:44.305 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:44.305 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:44.305 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:00:44.305 + rm -f /tmp/spdk-ld-path
00:00:44.305 + source autorun-spdk.conf
00:00:44.305 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.305 ++ SPDK_TEST_NVMF=1
00:00:44.305 ++ SPDK_TEST_NVME_CLI=1
00:00:44.305 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.305 ++ SPDK_TEST_NVMF_NICS=e810
00:00:44.305 ++ SPDK_TEST_VFIOUSER=1
00:00:44.305 ++ SPDK_RUN_UBSAN=1
00:00:44.305 ++ NET_TYPE=phy
00:00:44.305 ++ RUN_NIGHTLY=0
00:00:44.305 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:44.305 + [[ -n '' ]]
00:00:44.305 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:44.305 + for M in /var/spdk/build-*-manifest.txt
00:00:44.305 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:44.305 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:44.305 + for M in /var/spdk/build-*-manifest.txt
00:00:44.305 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:44.305 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:44.305 + for M in /var/spdk/build-*-manifest.txt
00:00:44.305 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:44.305 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:44.305 ++ uname
00:00:44.305 + [[ Linux == \L\i\n\u\x ]]
00:00:44.305 + sudo dmesg -T
00:00:44.305 + sudo dmesg --clear
00:00:44.305 + dmesg_pid=517975
00:00:44.305 + [[ Fedora Linux == FreeBSD ]]
00:00:44.305 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:44.305 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:44.305 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:44.305 + [[ -x /usr/src/fio-static/fio ]]
00:00:44.305 + export FIO_BIN=/usr/src/fio-static/fio
00:00:44.305 + FIO_BIN=/usr/src/fio-static/fio
00:00:44.305 + sudo dmesg -Tw
00:00:44.305 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:44.305 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:44.305 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:44.305 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:44.305 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:44.305 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:44.305 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:44.305 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:44.305 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:44.305 13:44:23 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:00:44.305 13:44:23 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:44.305 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.305 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:00:44.305 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:00:44.306 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.306 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:00:44.306 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:00:44.306 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:00:44.306 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:00:44.306 13:44:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:00:44.306 13:44:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:44.306 13:44:23 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:44.306 13:44:23 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:00:44.306 13:44:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:44.306 13:44:23 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:44.306 13:44:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:44.306 13:44:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:44.306 13:44:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:44.306 13:44:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.306 13:44:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.306 13:44:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.306 13:44:23 -- paths/export.sh@5 -- $ export PATH
00:00:44.306 13:44:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.306 13:44:23 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:44.306 13:44:23 -- common/autobuild_common.sh@486 -- $ date +%s
00:00:44.306 13:44:23 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730897063.XXXXXX
00:00:44.306 13:44:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730897063.eAODYK
00:00:44.306 13:44:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:00:44.306 13:44:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:00:44.306 13:44:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:44.306 13:44:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:44.306 13:44:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:44.306 13:44:23 -- common/autobuild_common.sh@502 -- $ get_config_params
00:00:44.306 13:44:23 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:00:44.306 13:44:23 -- common/autotest_common.sh@10 -- $ set +x
00:00:44.306 13:44:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:44.306 13:44:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:00:44.306 13:44:23 -- pm/common@17 -- $ local monitor
00:00:44.306 13:44:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.306 13:44:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.306 13:44:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.306 13:44:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.306 13:44:23 -- pm/common@25 -- $ sleep 1
00:00:44.306 13:44:23 -- pm/common@21 -- $ date +%s
00:00:44.306 13:44:23 -- pm/common@21 -- $ date +%s
00:00:44.306 13:44:23 -- pm/common@21 -- $ date +%s
00:00:44.306 13:44:23 -- pm/common@21 -- $ date +%s
00:00:44.306 13:44:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730897063
00:00:44.306 13:44:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730897063
00:00:44.306 13:44:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730897063
00:00:44.306 13:44:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730897063
00:00:44.306 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730897063_collect-cpu-load.pm.log
00:00:44.306 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730897063_collect-vmstat.pm.log
00:00:44.306 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730897063_collect-cpu-temp.pm.log
00:00:44.306 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730897063_collect-bmc-pm.bmc.pm.log
00:00:45.245 13:44:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:00:45.245 13:44:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:45.245 13:44:24 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:45.245 13:44:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:45.246 13:44:24 -- spdk/autobuild.sh@16 -- $ date -u
00:00:45.246 Wed Nov 6 12:44:24 PM UTC 2024
00:00:45.246 13:44:24 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:45.246 v25.01-pre-187-gb7ef84b3d
00:00:45.246 13:44:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:45.246 13:44:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:45.246 13:44:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:45.246 13:44:24 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:00:45.246 13:44:24 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:00:45.246 13:44:24 -- common/autotest_common.sh@10 -- $ set +x
00:00:45.246 ************************************
00:00:45.246 START TEST ubsan
00:00:45.246 ************************************
00:00:45.246 13:44:24 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:00:45.246 using ubsan
00:00:45.246
00:00:45.246 real 0m0.000s
00:00:45.246 user 0m0.000s
00:00:45.246 sys 0m0.000s
00:00:45.246 13:44:24 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:00:45.246 13:44:24 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:45.246 ************************************
00:00:45.246 END TEST ubsan
00:00:45.246 ************************************
00:00:45.246 13:44:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:45.246 13:44:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:45.246 13:44:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:45.246 13:44:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:45.246 13:44:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:45.246 13:44:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:45.246 13:44:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:45.246 13:44:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:45.246 13:44:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:45.246 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:45.246 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:45.505 Using 'verbs' RDMA provider
00:00:56.060 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:06.050 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:06.050 Creating mk/config.mk...done.
00:01:06.050 Creating mk/cc.flags.mk...done.
00:01:06.050 Type 'make' to build.
00:01:06.050 13:44:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:06.050 13:44:44 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:06.050 13:44:44 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:06.050 13:44:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.050 ************************************
00:01:06.050 START TEST make
00:01:06.050 ************************************
00:01:06.050 13:44:44 make -- common/autotest_common.sh@1127 -- $ make -j144
00:01:06.050 make[1]: Nothing to be done for 'all'.
00:01:06.991 The Meson build system
00:01:06.991 Version: 1.5.0
00:01:06.992 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:06.992 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:06.992 Build type: native build
00:01:06.992 Project name: libvfio-user
00:01:06.992 Project version: 0.0.1
00:01:06.992 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:06.992 C linker for the host machine: cc ld.bfd 2.40-14
00:01:06.992 Host machine cpu family: x86_64
00:01:06.992 Host machine cpu: x86_64
00:01:06.992 Run-time dependency threads found: YES
00:01:06.992 Library dl found: YES
00:01:06.992 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:06.992 Run-time dependency json-c found: YES 0.17
00:01:06.992 Run-time dependency cmocka found: YES 1.1.7
00:01:06.992 Program pytest-3 found: NO
00:01:06.992 Program flake8 found: NO
00:01:06.992 Program misspell-fixer found: NO
00:01:06.992 Program restructuredtext-lint found: NO
00:01:06.992 Program valgrind found: YES (/usr/bin/valgrind)
00:01:06.992 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.992 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.992 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.992 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:06.992 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:06.992 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:06.992 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
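The ./configure invocation above is assembled from the config_params string that get_config_params produced earlier from autorun-spdk.conf (SPDK_RUN_UBSAN=1 appears to surface as --enable-ubsan, and SPDK_TEST_VFIOUSER=1 as --with-vfio-user, which is what pulls in the libvfio-user Meson configure whose summary continues below), plus --with-shared added by autobuild. A hedged sketch for replaying the build step by hand, with the flags copied from the log and the CI node's -j144 replaced by a portable stand-in:

  # Replay the autobuild configure+make step; flags copied from the log above.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"   # the CI node used -j144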
00:01:06.992 Build targets in project: 8
00:01:06.992 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:06.992 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:06.992
00:01:06.992 libvfio-user 0.0.1
00:01:06.992
00:01:06.992 User defined options
00:01:06.992 buildtype : debug
00:01:06.992 default_library: shared
00:01:06.992 libdir : /usr/local/lib
00:01:06.992
00:01:06.992 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:07.562 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:07.562 [1/37] Compiling C object samples/null.p/null.c.o
00:01:07.562 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:07.562 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:07.562 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:07.562 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:07.562 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:07.562 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:07.562 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:07.562 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:07.562 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:07.562 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:07.562 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:07.562 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:07.562 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:07.562 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:07.562 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:07.562 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:07.562 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:07.562 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:07.562 [20/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:07.562 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:07.562 [22/37] Compiling C object samples/server.p/server.c.o
00:01:07.562 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:07.562 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:07.562 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:07.562 [26/37] Compiling C object samples/client.p/client.c.o
00:01:07.562 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:07.562 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:07.562 [29/37] Linking target samples/client
00:01:07.562 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:07.562 [31/37] Linking target test/unit_tests
00:01:07.822 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:07.822 [33/37] Linking target samples/server
00:01:07.822 [34/37] Linking target samples/null
00:01:07.822 [35/37] Linking target samples/gpio-pci-idio-16
00:01:07.822 [36/37] Linking target samples/lspci
00:01:07.822 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:07.822 INFO: autodetecting backend as ninja
00:01:07.822 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
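The [1/37]…[37/37] run above is a standard out-of-tree Meson/Ninja build of libvfio-user (buildtype debug, shared default_library, per the options summary), and the DESTDIR line that follows stages the install under spdk/build rather than the real /usr/local/lib. A hedged sketch of the same sequence, with a shortened staging path standing in for the long workspace paths:

  # Out-of-tree Meson flow mirrored from the log (staging path illustrative).
  meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  DESTDIR="$PWD/stage" meson install --quiet -C build-debug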
00:01:07.822 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:08.081 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:08.081 ninja: no work to do.
00:01:11.371 The Meson build system
00:01:11.371 Version: 1.5.0
00:01:11.371 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:11.371 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:11.371 Build type: native build
00:01:11.371 Program cat found: YES (/usr/bin/cat)
00:01:11.371 Project name: DPDK
00:01:11.371 Project version: 24.03.0
00:01:11.371 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:11.371 C linker for the host machine: cc ld.bfd 2.40-14
00:01:11.371 Host machine cpu family: x86_64
00:01:11.371 Host machine cpu: x86_64
00:01:11.371 Message: ## Building in Developer Mode ##
00:01:11.371 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:11.371 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:11.371 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:11.371 Program python3 found: YES (/usr/bin/python3)
00:01:11.371 Program cat found: YES (/usr/bin/cat)
00:01:11.372 Compiler for C supports arguments -march=native: YES
00:01:11.372 Checking for size of "void *" : 8
00:01:11.372 Checking for size of "void *" : 8 (cached)
00:01:11.372 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:11.372 Library m found: YES
00:01:11.372 Library numa found: YES
00:01:11.372 Has header "numaif.h" : YES
00:01:11.372 Library fdt found: NO
00:01:11.372 Library execinfo found: NO
00:01:11.372 Has header "execinfo.h" : YES
00:01:11.372 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:11.372 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:11.372 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:11.372 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:11.372 Run-time dependency openssl found: YES 3.1.1
00:01:11.372 Run-time dependency libpcap found: YES 1.10.4
00:01:11.372 Has header "pcap.h" with dependency libpcap: YES
00:01:11.372 Compiler for C supports arguments -Wcast-qual: YES
00:01:11.372 Compiler for C supports arguments -Wdeprecated: YES
00:01:11.372 Compiler for C supports arguments -Wformat: YES
00:01:11.372 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:11.372 Compiler for C supports arguments -Wformat-security: NO
00:01:11.372 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:11.372 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:11.372 Compiler for C supports arguments -Wnested-externs: YES
00:01:11.372 Compiler for C supports arguments -Wold-style-definition: YES
00:01:11.372 Compiler for C supports arguments -Wpointer-arith: YES
00:01:11.372 Compiler for C supports arguments -Wsign-compare: YES
00:01:11.372 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:11.372 Compiler for C supports arguments -Wundef: YES
00:01:11.372 Compiler for C supports arguments -Wwrite-strings: YES
00:01:11.372 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:11.372 Compiler for C supports arguments
-Wno-packed-not-aligned: YES 00:01:11.372 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:11.372 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:11.372 Program objdump found: YES (/usr/bin/objdump) 00:01:11.372 Compiler for C supports arguments -mavx512f: YES 00:01:11.372 Checking if "AVX512 checking" compiles: YES 00:01:11.372 Fetching value of define "__SSE4_2__" : 1 00:01:11.372 Fetching value of define "__AES__" : 1 00:01:11.372 Fetching value of define "__AVX__" : 1 00:01:11.372 Fetching value of define "__AVX2__" : 1 00:01:11.372 Fetching value of define "__AVX512BW__" : 1 00:01:11.372 Fetching value of define "__AVX512CD__" : 1 00:01:11.372 Fetching value of define "__AVX512DQ__" : 1 00:01:11.372 Fetching value of define "__AVX512F__" : 1 00:01:11.372 Fetching value of define "__AVX512VL__" : 1 00:01:11.372 Fetching value of define "__PCLMUL__" : 1 00:01:11.372 Fetching value of define "__RDRND__" : 1 00:01:11.372 Fetching value of define "__RDSEED__" : 1 00:01:11.372 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:11.372 Fetching value of define "__znver1__" : (undefined) 00:01:11.372 Fetching value of define "__znver2__" : (undefined) 00:01:11.372 Fetching value of define "__znver3__" : (undefined) 00:01:11.372 Fetching value of define "__znver4__" : (undefined) 00:01:11.372 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:11.372 Message: lib/log: Defining dependency "log" 00:01:11.372 Message: lib/kvargs: Defining dependency "kvargs" 00:01:11.372 Message: lib/telemetry: Defining dependency "telemetry" 00:01:11.372 Checking for function "getentropy" : NO 00:01:11.372 Message: lib/eal: Defining dependency "eal" 00:01:11.372 Message: lib/ring: Defining dependency "ring" 00:01:11.372 Message: lib/rcu: Defining dependency "rcu" 00:01:11.372 Message: lib/mempool: Defining dependency "mempool" 00:01:11.372 Message: lib/mbuf: Defining dependency "mbuf" 00:01:11.372 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:11.372 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:11.372 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:11.372 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:11.372 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:11.372 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:11.372 Compiler for C supports arguments -mpclmul: YES 00:01:11.372 Compiler for C supports arguments -maes: YES 00:01:11.372 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:11.372 Compiler for C supports arguments -mavx512bw: YES 00:01:11.372 Compiler for C supports arguments -mavx512dq: YES 00:01:11.372 Compiler for C supports arguments -mavx512vl: YES 00:01:11.372 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:11.372 Compiler for C supports arguments -mavx2: YES 00:01:11.372 Compiler for C supports arguments -mavx: YES 00:01:11.372 Message: lib/net: Defining dependency "net" 00:01:11.372 Message: lib/meter: Defining dependency "meter" 00:01:11.372 Message: lib/ethdev: Defining dependency "ethdev" 00:01:11.372 Message: lib/pci: Defining dependency "pci" 00:01:11.372 Message: lib/cmdline: Defining dependency "cmdline" 00:01:11.372 Message: lib/hash: Defining dependency "hash" 00:01:11.372 Message: lib/timer: Defining dependency "timer" 00:01:11.372 Message: lib/compressdev: Defining dependency "compressdev" 00:01:11.372 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:11.372 Message: lib/dmadev: Defining dependency "dmadev" 
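The long runs of 'Compiler for C supports arguments …: YES/NO' and 'Fetching value of define …' lines above and below are Meson feature probes: each is essentially a throwaway compile whose outcome becomes the YES/NO in the log. A rough shell equivalent of a single probe (Meson's real check is more careful, e.g. about flags that merely warn):

  # Approximate one "Compiler for C supports arguments -mavx512f" probe by hand.
  echo 'int main(void){return 0;}' \
    | cc -Werror -mavx512f -x c - -o /dev/null && echo '-mavx512f: YES'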
00:01:11.372 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:11.372 Message: lib/power: Defining dependency "power" 00:01:11.372 Message: lib/reorder: Defining dependency "reorder" 00:01:11.372 Message: lib/security: Defining dependency "security" 00:01:11.372 Has header "linux/userfaultfd.h" : YES 00:01:11.372 Has header "linux/vduse.h" : YES 00:01:11.372 Message: lib/vhost: Defining dependency "vhost" 00:01:11.372 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:11.372 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:11.372 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:11.372 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:11.372 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:11.372 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:11.372 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:11.372 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:11.372 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:11.372 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:11.372 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:11.372 Configuring doxy-api-html.conf using configuration 00:01:11.372 Configuring doxy-api-man.conf using configuration 00:01:11.372 Program mandb found: YES (/usr/bin/mandb) 00:01:11.372 Program sphinx-build found: NO 00:01:11.372 Configuring rte_build_config.h using configuration 00:01:11.372 Message: 00:01:11.372 ================= 00:01:11.372 Applications Enabled 00:01:11.372 ================= 00:01:11.372 00:01:11.372 apps: 00:01:11.372 00:01:11.372 00:01:11.372 Message: 00:01:11.372 ================= 00:01:11.372 Libraries Enabled 00:01:11.372 ================= 00:01:11.372 00:01:11.372 libs: 00:01:11.372 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:11.372 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:11.372 cryptodev, dmadev, power, reorder, security, vhost, 00:01:11.372 00:01:11.372 Message: 00:01:11.372 =============== 00:01:11.372 Drivers Enabled 00:01:11.372 =============== 00:01:11.372 00:01:11.372 common: 00:01:11.372 00:01:11.372 bus: 00:01:11.372 pci, vdev, 00:01:11.372 mempool: 00:01:11.372 ring, 00:01:11.372 dma: 00:01:11.372 00:01:11.372 net: 00:01:11.372 00:01:11.372 crypto: 00:01:11.372 00:01:11.372 compress: 00:01:11.372 00:01:11.372 vdpa: 00:01:11.372 00:01:11.372 00:01:11.372 Message: 00:01:11.372 ================= 00:01:11.372 Content Skipped 00:01:11.372 ================= 00:01:11.372 00:01:11.372 apps: 00:01:11.372 dumpcap: explicitly disabled via build config 00:01:11.372 graph: explicitly disabled via build config 00:01:11.372 pdump: explicitly disabled via build config 00:01:11.372 proc-info: explicitly disabled via build config 00:01:11.372 test-acl: explicitly disabled via build config 00:01:11.372 test-bbdev: explicitly disabled via build config 00:01:11.372 test-cmdline: explicitly disabled via build config 00:01:11.372 test-compress-perf: explicitly disabled via build config 00:01:11.372 test-crypto-perf: explicitly disabled via build config 00:01:11.372 test-dma-perf: explicitly disabled via build config 00:01:11.372 test-eventdev: explicitly disabled via build config 00:01:11.372 test-fib: explicitly disabled via build config 00:01:11.372 test-flow-perf: explicitly disabled via build config 00:01:11.372 test-gpudev: explicitly disabled 
via build config 00:01:11.372 test-mldev: explicitly disabled via build config 00:01:11.372 test-pipeline: explicitly disabled via build config 00:01:11.372 test-pmd: explicitly disabled via build config 00:01:11.372 test-regex: explicitly disabled via build config 00:01:11.372 test-sad: explicitly disabled via build config 00:01:11.372 test-security-perf: explicitly disabled via build config 00:01:11.372 00:01:11.372 libs: 00:01:11.372 argparse: explicitly disabled via build config 00:01:11.372 metrics: explicitly disabled via build config 00:01:11.372 acl: explicitly disabled via build config 00:01:11.372 bbdev: explicitly disabled via build config 00:01:11.372 bitratestats: explicitly disabled via build config 00:01:11.372 bpf: explicitly disabled via build config 00:01:11.372 cfgfile: explicitly disabled via build config 00:01:11.372 distributor: explicitly disabled via build config 00:01:11.372 efd: explicitly disabled via build config 00:01:11.372 eventdev: explicitly disabled via build config 00:01:11.372 dispatcher: explicitly disabled via build config 00:01:11.372 gpudev: explicitly disabled via build config 00:01:11.372 gro: explicitly disabled via build config 00:01:11.372 gso: explicitly disabled via build config 00:01:11.372 ip_frag: explicitly disabled via build config 00:01:11.372 jobstats: explicitly disabled via build config 00:01:11.372 latencystats: explicitly disabled via build config 00:01:11.372 lpm: explicitly disabled via build config 00:01:11.372 member: explicitly disabled via build config 00:01:11.372 pcapng: explicitly disabled via build config 00:01:11.372 rawdev: explicitly disabled via build config 00:01:11.372 regexdev: explicitly disabled via build config 00:01:11.372 mldev: explicitly disabled via build config 00:01:11.373 rib: explicitly disabled via build config 00:01:11.373 sched: explicitly disabled via build config 00:01:11.373 stack: explicitly disabled via build config 00:01:11.373 ipsec: explicitly disabled via build config 00:01:11.373 pdcp: explicitly disabled via build config 00:01:11.373 fib: explicitly disabled via build config 00:01:11.373 port: explicitly disabled via build config 00:01:11.373 pdump: explicitly disabled via build config 00:01:11.373 table: explicitly disabled via build config 00:01:11.373 pipeline: explicitly disabled via build config 00:01:11.373 graph: explicitly disabled via build config 00:01:11.373 node: explicitly disabled via build config 00:01:11.373 00:01:11.373 drivers: 00:01:11.373 common/cpt: not in enabled drivers build config 00:01:11.373 common/dpaax: not in enabled drivers build config 00:01:11.373 common/iavf: not in enabled drivers build config 00:01:11.373 common/idpf: not in enabled drivers build config 00:01:11.373 common/ionic: not in enabled drivers build config 00:01:11.373 common/mvep: not in enabled drivers build config 00:01:11.373 common/octeontx: not in enabled drivers build config 00:01:11.373 bus/auxiliary: not in enabled drivers build config 00:01:11.373 bus/cdx: not in enabled drivers build config 00:01:11.373 bus/dpaa: not in enabled drivers build config 00:01:11.373 bus/fslmc: not in enabled drivers build config 00:01:11.373 bus/ifpga: not in enabled drivers build config 00:01:11.373 bus/platform: not in enabled drivers build config 00:01:11.373 bus/uacce: not in enabled drivers build config 00:01:11.373 bus/vmbus: not in enabled drivers build config 00:01:11.373 common/cnxk: not in enabled drivers build config 00:01:11.373 common/mlx5: not in enabled drivers build config 00:01:11.373 
common/nfp: not in enabled drivers build config 00:01:11.373 common/nitrox: not in enabled drivers build config 00:01:11.373 common/qat: not in enabled drivers build config 00:01:11.373 common/sfc_efx: not in enabled drivers build config 00:01:11.373 mempool/bucket: not in enabled drivers build config 00:01:11.373 mempool/cnxk: not in enabled drivers build config 00:01:11.373 mempool/dpaa: not in enabled drivers build config 00:01:11.373 mempool/dpaa2: not in enabled drivers build config 00:01:11.373 mempool/octeontx: not in enabled drivers build config 00:01:11.373 mempool/stack: not in enabled drivers build config 00:01:11.373 dma/cnxk: not in enabled drivers build config 00:01:11.373 dma/dpaa: not in enabled drivers build config 00:01:11.373 dma/dpaa2: not in enabled drivers build config 00:01:11.373 dma/hisilicon: not in enabled drivers build config 00:01:11.373 dma/idxd: not in enabled drivers build config 00:01:11.373 dma/ioat: not in enabled drivers build config 00:01:11.373 dma/skeleton: not in enabled drivers build config 00:01:11.373 net/af_packet: not in enabled drivers build config 00:01:11.373 net/af_xdp: not in enabled drivers build config 00:01:11.373 net/ark: not in enabled drivers build config 00:01:11.373 net/atlantic: not in enabled drivers build config 00:01:11.373 net/avp: not in enabled drivers build config 00:01:11.373 net/axgbe: not in enabled drivers build config 00:01:11.373 net/bnx2x: not in enabled drivers build config 00:01:11.373 net/bnxt: not in enabled drivers build config 00:01:11.373 net/bonding: not in enabled drivers build config 00:01:11.373 net/cnxk: not in enabled drivers build config 00:01:11.373 net/cpfl: not in enabled drivers build config 00:01:11.373 net/cxgbe: not in enabled drivers build config 00:01:11.373 net/dpaa: not in enabled drivers build config 00:01:11.373 net/dpaa2: not in enabled drivers build config 00:01:11.373 net/e1000: not in enabled drivers build config 00:01:11.373 net/ena: not in enabled drivers build config 00:01:11.373 net/enetc: not in enabled drivers build config 00:01:11.373 net/enetfec: not in enabled drivers build config 00:01:11.373 net/enic: not in enabled drivers build config 00:01:11.373 net/failsafe: not in enabled drivers build config 00:01:11.373 net/fm10k: not in enabled drivers build config 00:01:11.373 net/gve: not in enabled drivers build config 00:01:11.373 net/hinic: not in enabled drivers build config 00:01:11.373 net/hns3: not in enabled drivers build config 00:01:11.373 net/i40e: not in enabled drivers build config 00:01:11.373 net/iavf: not in enabled drivers build config 00:01:11.373 net/ice: not in enabled drivers build config 00:01:11.373 net/idpf: not in enabled drivers build config 00:01:11.373 net/igc: not in enabled drivers build config 00:01:11.373 net/ionic: not in enabled drivers build config 00:01:11.373 net/ipn3ke: not in enabled drivers build config 00:01:11.373 net/ixgbe: not in enabled drivers build config 00:01:11.373 net/mana: not in enabled drivers build config 00:01:11.373 net/memif: not in enabled drivers build config 00:01:11.373 net/mlx4: not in enabled drivers build config 00:01:11.373 net/mlx5: not in enabled drivers build config 00:01:11.373 net/mvneta: not in enabled drivers build config 00:01:11.373 net/mvpp2: not in enabled drivers build config 00:01:11.373 net/netvsc: not in enabled drivers build config 00:01:11.373 net/nfb: not in enabled drivers build config 00:01:11.373 net/nfp: not in enabled drivers build config 00:01:11.373 net/ngbe: not in enabled drivers build 
config 00:01:11.373 net/null: not in enabled drivers build config 00:01:11.373 net/octeontx: not in enabled drivers build config 00:01:11.373 net/octeon_ep: not in enabled drivers build config 00:01:11.373 net/pcap: not in enabled drivers build config 00:01:11.373 net/pfe: not in enabled drivers build config 00:01:11.373 net/qede: not in enabled drivers build config 00:01:11.373 net/ring: not in enabled drivers build config 00:01:11.373 net/sfc: not in enabled drivers build config 00:01:11.373 net/softnic: not in enabled drivers build config 00:01:11.373 net/tap: not in enabled drivers build config 00:01:11.373 net/thunderx: not in enabled drivers build config 00:01:11.373 net/txgbe: not in enabled drivers build config 00:01:11.373 net/vdev_netvsc: not in enabled drivers build config 00:01:11.373 net/vhost: not in enabled drivers build config 00:01:11.373 net/virtio: not in enabled drivers build config 00:01:11.373 net/vmxnet3: not in enabled drivers build config 00:01:11.373 raw/*: missing internal dependency, "rawdev" 00:01:11.373 crypto/armv8: not in enabled drivers build config 00:01:11.373 crypto/bcmfs: not in enabled drivers build config 00:01:11.373 crypto/caam_jr: not in enabled drivers build config 00:01:11.373 crypto/ccp: not in enabled drivers build config 00:01:11.373 crypto/cnxk: not in enabled drivers build config 00:01:11.373 crypto/dpaa_sec: not in enabled drivers build config 00:01:11.373 crypto/dpaa2_sec: not in enabled drivers build config 00:01:11.373 crypto/ipsec_mb: not in enabled drivers build config 00:01:11.373 crypto/mlx5: not in enabled drivers build config 00:01:11.373 crypto/mvsam: not in enabled drivers build config 00:01:11.373 crypto/nitrox: not in enabled drivers build config 00:01:11.373 crypto/null: not in enabled drivers build config 00:01:11.373 crypto/octeontx: not in enabled drivers build config 00:01:11.373 crypto/openssl: not in enabled drivers build config 00:01:11.373 crypto/scheduler: not in enabled drivers build config 00:01:11.373 crypto/uadk: not in enabled drivers build config 00:01:11.373 crypto/virtio: not in enabled drivers build config 00:01:11.373 compress/isal: not in enabled drivers build config 00:01:11.373 compress/mlx5: not in enabled drivers build config 00:01:11.373 compress/nitrox: not in enabled drivers build config 00:01:11.373 compress/octeontx: not in enabled drivers build config 00:01:11.373 compress/zlib: not in enabled drivers build config 00:01:11.373 regex/*: missing internal dependency, "regexdev" 00:01:11.373 ml/*: missing internal dependency, "mldev" 00:01:11.373 vdpa/ifc: not in enabled drivers build config 00:01:11.373 vdpa/mlx5: not in enabled drivers build config 00:01:11.373 vdpa/nfp: not in enabled drivers build config 00:01:11.373 vdpa/sfc: not in enabled drivers build config 00:01:11.373 event/*: missing internal dependency, "eventdev" 00:01:11.373 baseband/*: missing internal dependency, "bbdev" 00:01:11.373 gpu/*: missing internal dependency, "gpudev" 00:01:11.373 00:01:11.373 00:01:11.373 Build targets in project: 84 00:01:11.373 00:01:11.373 DPDK 24.03.0 00:01:11.373 00:01:11.373 User defined options 00:01:11.373 buildtype : debug 00:01:11.373 default_library : shared 00:01:11.373 libdir : lib 00:01:11.373 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:11.373 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:11.373 c_link_args : 00:01:11.373 cpu_instruction_set: native 00:01:11.373 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:11.373 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:11.373 enable_docs : false 00:01:11.373 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:11.373 enable_kmods : false 00:01:11.373 max_lcores : 128 00:01:11.373 tests : false 00:01:11.373 00:01:11.373 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:11.642 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:11.642 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:11.642 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:11.642 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:11.642 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:11.642 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:11.642 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:11.642 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:11.642 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:11.642 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:11.642 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:11.642 [11/267] Linking static target lib/librte_kvargs.a 00:01:11.642 [12/267] Linking static target lib/librte_log.a 00:01:11.908 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:11.908 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:11.908 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:11.908 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:11.908 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:11.908 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:11.908 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:11.908 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:11.908 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:11.908 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:11.908 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:11.908 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:11.908 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:11.908 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:11.908 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:11.908 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:11.908 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:11.908 [30/267] Linking static target lib/librte_pci.a 00:01:11.908 [31/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:11.908 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:11.908 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:11.908 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:11.908 [35/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:11.908 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:11.908 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:12.169 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:12.169 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:12.169 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:12.169 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:12.169 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:12.169 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:12.169 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:12.169 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:12.169 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:12.169 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.169 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:12.169 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:12.169 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:12.169 [51/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.169 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:12.169 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:12.169 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:12.169 [55/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:12.169 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:12.169 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:12.169 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:12.169 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:12.169 [60/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:12.169 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:12.169 [62/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:12.169 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:12.169 [64/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:12.169 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:12.169 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:12.169 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:12.169 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:12.169 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:12.169 [70/267] Linking static target lib/librte_telemetry.a 
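The [N/267] prefixes running through this part of the log are Ninja's progress counter over the 267 build edges Meson generated for this DPDK configuration; on a heavily parallel node the compile steps complete out of order, which is why the numbering above and below jumps between libraries. As a hedged sketch, the same generated build directory could be driven by hand from the workspace:

  # Resume or clean the generated DPDK build directory manually (paths relative
  # to the workspace; ninja -t clean is ninja's built-in clean tool).
  ninja -C spdk/dpdk/build-tmp -j"$(nproc)"
  ninja -C spdk/dpdk/build-tmp -t clean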
00:01:12.169 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:12.169 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:12.169 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:12.169 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:12.169 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:12.169 [76/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:12.169 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:12.169 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:12.169 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:12.169 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:12.169 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:12.169 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:12.169 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:12.169 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:12.169 [85/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:12.169 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:12.169 [87/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:12.169 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:12.169 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:12.169 [90/267] Linking static target lib/librte_meter.a
00:01:12.169 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:12.169 [92/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:12.169 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:12.169 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:12.169 [95/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:12.169 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:12.169 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:12.169 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:12.169 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:12.169 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:12.169 [101/267] Linking static target lib/librte_ring.a
00:01:12.169 [102/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:12.169 [103/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:12.169 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:12.169 [105/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:12.169 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:12.430 [107/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:12.430 [108/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:12.430 [109/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:12.430 [110/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:12.430 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:12.430 [112/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:12.430 [113/267] Linking static target lib/librte_timer.a
00:01:12.430 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:12.430 [115/267] Linking static target lib/librte_net.a
00:01:12.430 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:12.430 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:12.430 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:12.430 [119/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:12.430 [120/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:12.430 [121/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:12.430 [122/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:12.430 [123/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:12.430 [124/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:12.430 [125/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:12.430 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:12.430 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:12.430 [128/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:12.430 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:12.430 [130/267] Linking static target lib/librte_power.a
00:01:12.430 [131/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:12.430 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:12.430 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:12.430 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:12.430 [135/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:12.430 [136/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:12.430 [137/267] Linking static target lib/librte_mempool.a
00:01:12.430 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:12.430 [139/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:12.430 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:12.430 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:12.430 [142/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:12.430 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:12.430 [144/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:12.430 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:12.430 [146/267] Linking static target lib/librte_compressdev.a
00:01:12.430 [147/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:12.430 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:12.430 [149/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:12.430 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.430 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:12.430 [152/267] Linking static target lib/librte_dmadev.a
00:01:12.430 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:12.430 [154/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:12.430 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:12.430 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:12.431 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:12.431 [158/267] Linking target lib/librte_log.so.24.1
00:01:12.431 [159/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:12.431 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:12.431 [161/267] Linking static target lib/librte_cmdline.a
00:01:12.431 [162/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:12.431 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:12.431 [164/267] Linking static target lib/librte_security.a
00:01:12.431 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:12.431 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:12.431 [167/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:12.431 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:12.431 [169/267] Linking static target lib/librte_rcu.a
00:01:12.431 [170/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:12.431 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:12.431 [172/267] Linking static target lib/librte_eal.a
00:01:12.431 [173/267] Linking static target lib/librte_reorder.a
00:01:12.431 [174/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.431 [175/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:12.431 [176/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:12.431 [177/267] Linking static target lib/librte_mbuf.a
00:01:12.431 [178/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:12.431 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:12.431 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:12.431 [181/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:12.431 [182/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.431 [183/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.431 [184/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:12.431 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:12.431 [186/267] Linking target lib/librte_kvargs.so.24.1
00:01:12.431 [187/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:12.431 [188/267] Linking static target lib/librte_hash.a
00:01:12.431 [189/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:12.431 [190/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:12.431 [191/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:12.431 [192/267] Linking static target drivers/librte_bus_vdev.a
00:01:12.431 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:12.431 [194/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:12.692 [195/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [196/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [197/267] Linking target lib/librte_telemetry.so.24.1
00:01:12.692 [198/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:12.692 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:12.692 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:12.692 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:12.692 [202/267] Linking static target drivers/librte_bus_pci.a
00:01:12.692 [203/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [204/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:12.692 [205/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:12.692 [207/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:12.692 [208/267] Linking static target drivers/librte_mempool_ring.a
00:01:12.692 [209/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:12.692 [210/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:12.692 [211/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [212/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [213/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [214/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:12.692 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [216/267] Linking static target lib/librte_cryptodev.a
00:01:12.692 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:12.692 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.692 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.952 [220/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.952 [221/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.952 [222/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:12.952 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.952 [224/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:12.952 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.952 [226/267] Linking static target lib/librte_ethdev.a
00:01:13.888 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:13.888 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:13.888 [229/267] Linking static target lib/librte_vhost.a
00:01:14.825 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.362 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.362 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.362 [233/267] Linking target lib/librte_eal.so.24.1
00:01:17.362 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:17.362 [235/267] Linking target lib/librte_ring.so.24.1
00:01:17.362 [236/267] Linking target lib/librte_timer.so.24.1
00:01:17.362 [237/267] Linking target lib/librte_meter.so.24.1
00:01:17.362 [238/267] Linking target drivers/librte_bus_vdev.so.24.1
00:01:17.362 [239/267] Linking target lib/librte_dmadev.so.24.1
00:01:17.362 [240/267] Linking target lib/librte_pci.so.24.1
00:01:17.622 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:17.622 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:17.622 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:17.622 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:17.622 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:17.622 [246/267] Linking target lib/librte_rcu.so.24.1
00:01:17.622 [247/267] Linking target drivers/librte_bus_pci.so.24.1
00:01:17.622 [248/267] Linking target lib/librte_mempool.so.24.1
00:01:17.622 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:17.622 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:17.622 [251/267] Linking target drivers/librte_mempool_ring.so.24.1
00:01:17.622 [252/267] Linking target lib/librte_mbuf.so.24.1
00:01:17.881 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:17.881 [254/267] Linking target lib/librte_net.so.24.1
00:01:17.881 [255/267] Linking target lib/librte_reorder.so.24.1
00:01:17.881 [256/267] Linking target lib/librte_compressdev.so.24.1
00:01:17.881 [257/267] Linking target lib/librte_cryptodev.so.24.1
00:01:17.881 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:17.881 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:17.881 [260/267] Linking target lib/librte_security.so.24.1
00:01:17.881 [261/267] Linking target lib/librte_hash.so.24.1
00:01:17.881 [262/267] Linking target lib/librte_cmdline.so.24.1
00:01:17.881 [263/267] Linking target lib/librte_ethdev.so.24.1
00:01:17.881 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:17.881 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:01:18.140 [266/267] Linking target lib/librte_power.so.24.1
00:01:18.140 [267/267] Linking target lib/librte_vhost.so.24.1
00:01:18.140 INFO: autodetecting backend as ninja
00:01:18.140 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144
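The embedded DPDK build finishes here and SPDK's own make output begins below: CC lines are object compiles, LIB lines produce static archives, SO lines the versioned shared objects, and SYMLINK lines their unversioned links. A minimal sketch of the step driving that output, assuming SPDK's stock build flow (the job's actual configure flags are not visible in this part of the log):

    ./configure --with-dpdk=$PWD/dpdk/build
    make -j$(nproc)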
00:01:30.346 CC lib/log/log.o
00:01:30.346 CC lib/log/log_flags.o
00:01:30.346 CC lib/log/log_deprecated.o
00:01:30.346 CC lib/ut_mock/mock.o
00:01:30.346 CC lib/ut/ut.o
00:01:30.346 LIB libspdk_ut.a
00:01:30.346 LIB libspdk_ut_mock.a
00:01:30.346 LIB libspdk_log.a
00:01:30.346 SO libspdk_ut.so.2.0
00:01:30.346 SO libspdk_ut_mock.so.6.0
00:01:30.346 SO libspdk_log.so.7.1
00:01:30.346 SYMLINK libspdk_ut_mock.so
00:01:30.346 SYMLINK libspdk_ut.so
00:01:30.346 SYMLINK libspdk_log.so
00:01:30.346 CC lib/dma/dma.o
00:01:30.346 CC lib/util/base64.o
00:01:30.346 CC lib/util/bit_array.o
00:01:30.346 CC lib/util/cpuset.o
00:01:30.346 CC lib/util/crc32.o
00:01:30.346 CC lib/util/crc32c.o
00:01:30.346 CC lib/util/crc16.o
00:01:30.346 CXX lib/trace_parser/trace.o
00:01:30.346 CC lib/util/crc32_ieee.o
00:01:30.346 CC lib/util/fd_group.o
00:01:30.346 CC lib/util/fd.o
00:01:30.346 CC lib/util/crc64.o
00:01:30.346 CC lib/util/dif.o
00:01:30.346 CC lib/util/file.o
00:01:30.346 CC lib/util/iov.o
00:01:30.346 CC lib/util/math.o
00:01:30.346 CC lib/util/pipe.o
00:01:30.346 CC lib/util/net.o
00:01:30.346 CC lib/ioat/ioat.o
00:01:30.346 CC lib/util/hexlify.o
00:01:30.346 CC lib/util/strerror_tls.o
00:01:30.346 CC lib/util/string.o
00:01:30.346 CC lib/util/xor.o
00:01:30.346 CC lib/util/uuid.o
00:01:30.346 CC lib/util/md5.o
00:01:30.346 CC lib/util/zipf.o
00:01:30.346 CC lib/vfio_user/host/vfio_user_pci.o
00:01:30.346 CC lib/vfio_user/host/vfio_user.o
00:01:30.346 LIB libspdk_dma.a
00:01:30.346 LIB libspdk_ioat.a
00:01:30.346 SO libspdk_dma.so.5.0
00:01:30.346 SO libspdk_ioat.so.7.0
00:01:30.346 SYMLINK libspdk_dma.so
00:01:30.346 SYMLINK libspdk_ioat.so
00:01:30.346 LIB libspdk_vfio_user.a
00:01:30.346 SO libspdk_vfio_user.so.5.0
00:01:30.346 SYMLINK libspdk_vfio_user.so
00:01:30.346 LIB libspdk_util.a
00:01:30.346 SO libspdk_util.so.10.1
00:01:30.346 SYMLINK libspdk_util.so
00:01:30.346 CC lib/conf/conf.o
00:01:30.346 CC lib/env_dpdk/env.o
00:01:30.346 CC lib/env_dpdk/memory.o
00:01:30.346 CC lib/env_dpdk/pci.o
00:01:30.346 CC lib/env_dpdk/threads.o
00:01:30.346 CC lib/vmd/vmd.o
00:01:30.346 CC lib/env_dpdk/init.o
00:01:30.346 CC lib/rdma_utils/rdma_utils.o
00:01:30.346 CC lib/vmd/led.o
00:01:30.346 CC lib/env_dpdk/pci_ioat.o
00:01:30.346 CC lib/env_dpdk/pci_vmd.o
00:01:30.346 CC lib/env_dpdk/pci_idxd.o
00:01:30.346 CC lib/env_dpdk/sigbus_handler.o
00:01:30.346 CC lib/env_dpdk/pci_virtio.o
00:01:30.346 CC lib/env_dpdk/pci_event.o
00:01:30.346 CC lib/env_dpdk/pci_dpdk.o
00:01:30.346 CC lib/json/json_parse.o
00:01:30.346 CC lib/env_dpdk/pci_dpdk_2207.o
00:01:30.346 CC lib/idxd/idxd.o
00:01:30.346 CC lib/json/json_util.o
00:01:30.346 CC lib/json/json_write.o
00:01:30.346 CC lib/env_dpdk/pci_dpdk_2211.o
00:01:30.346 CC lib/idxd/idxd_user.o
00:01:30.346 CC lib/idxd/idxd_kernel.o
00:01:30.346 LIB libspdk_trace_parser.a
00:01:30.346 SO libspdk_trace_parser.so.6.0
00:01:30.604 SYMLINK libspdk_trace_parser.so
00:01:30.604 LIB libspdk_conf.a
00:01:30.604 SO libspdk_conf.so.6.0
00:01:30.604 LIB libspdk_rdma_utils.a
00:01:30.604 SO libspdk_rdma_utils.so.1.0
00:01:30.604 SYMLINK libspdk_conf.so
00:01:30.604 LIB libspdk_json.a
00:01:30.604 SO libspdk_json.so.6.0
00:01:30.604 SYMLINK libspdk_rdma_utils.so
00:01:30.604 SYMLINK libspdk_json.so
00:01:30.863 LIB libspdk_vmd.a
00:01:30.863 SO libspdk_vmd.so.6.0
00:01:30.863 SYMLINK libspdk_vmd.so
00:01:30.863 CC lib/rdma_provider/common.o
00:01:30.863 CC lib/rdma_provider/rdma_provider_verbs.o
00:01:30.863 LIB libspdk_idxd.a
00:01:30.863 SO libspdk_idxd.so.12.1
00:01:30.863 CC lib/jsonrpc/jsonrpc_server.o
00:01:30.863 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:01:30.863 CC lib/jsonrpc/jsonrpc_client.o
00:01:30.863 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:01:30.863 SYMLINK libspdk_idxd.so
00:01:31.121 LIB libspdk_rdma_provider.a
00:01:31.121 SO libspdk_rdma_provider.so.7.0
00:01:31.121 SYMLINK libspdk_rdma_provider.so
00:01:31.121 LIB libspdk_jsonrpc.a
00:01:31.121 SO libspdk_jsonrpc.so.6.0
00:01:31.121 SYMLINK libspdk_jsonrpc.so
00:01:31.380 CC lib/rpc/rpc.o
00:01:31.380 LIB libspdk_env_dpdk.a
00:01:31.640 LIB libspdk_rpc.a
00:01:31.640 SO libspdk_env_dpdk.so.15.1
00:01:31.640 SO libspdk_rpc.so.6.0
00:01:31.640 SYMLINK libspdk_rpc.so
00:01:31.640 SYMLINK libspdk_env_dpdk.so
00:01:31.899 CC lib/trace/trace.o
00:01:31.899 CC lib/trace/trace_rpc.o
00:01:31.899 CC lib/trace/trace_flags.o
00:01:31.899 CC lib/notify/notify.o
00:01:31.899 CC lib/notify/notify_rpc.o
00:01:31.899 CC lib/keyring/keyring.o
00:01:31.899 CC lib/keyring/keyring_rpc.o
00:01:31.899 LIB libspdk_notify.a
00:01:31.899 SO libspdk_notify.so.6.0
00:01:31.899 LIB libspdk_keyring.a
00:01:31.899 SYMLINK libspdk_notify.so
00:01:31.899 LIB libspdk_trace.a
00:01:32.158 SO libspdk_keyring.so.2.0
00:01:32.158 SO libspdk_trace.so.11.0
00:01:32.158 SYMLINK libspdk_keyring.so
00:01:32.158 SYMLINK libspdk_trace.so
00:01:32.416 CC lib/sock/sock.o
00:01:32.416 CC lib/sock/sock_rpc.o
00:01:32.416 CC lib/thread/thread.o
00:01:32.416 CC lib/thread/iobuf.o
00:01:32.416 LIB libspdk_sock.a
00:01:32.676 SO libspdk_sock.so.10.0
00:01:32.676 SYMLINK libspdk_sock.so
00:01:32.676 CC lib/nvme/nvme_ctrlr_cmd.o
00:01:32.935 CC lib/nvme/nvme_ctrlr.o
00:01:32.935 CC lib/nvme/nvme_fabric.o
00:01:32.935 CC lib/nvme/nvme_ns_cmd.o
00:01:32.935 CC lib/nvme/nvme_ns.o
00:01:32.935 CC lib/nvme/nvme_pcie_common.o
00:01:32.935 CC lib/nvme/nvme_pcie.o
00:01:32.935 CC lib/nvme/nvme_qpair.o
00:01:32.935 CC lib/nvme/nvme.o
00:01:32.935 CC lib/nvme/nvme_transport.o
00:01:32.935 CC lib/nvme/nvme_quirks.o
00:01:32.935 CC lib/nvme/nvme_discovery.o
00:01:32.935 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:01:32.935 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:01:32.935 CC lib/nvme/nvme_tcp.o
00:01:32.935 CC lib/nvme/nvme_opal.o
00:01:32.935 CC lib/nvme/nvme_io_msg.o
00:01:32.935 CC lib/nvme/nvme_poll_group.o
00:01:32.935 CC lib/nvme/nvme_zns.o
00:01:32.935 CC lib/nvme/nvme_stubs.o
00:01:32.935 CC lib/nvme/nvme_vfio_user.o
00:01:32.935 CC lib/nvme/nvme_auth.o
00:01:32.935 CC lib/nvme/nvme_cuse.o
00:01:32.935 CC lib/nvme/nvme_rdma.o
00:01:33.504 LIB libspdk_thread.a
00:01:33.504 SO libspdk_thread.so.11.0
00:01:33.504 SYMLINK libspdk_thread.so
00:01:33.763 CC lib/accel/accel.o
00:01:33.763 CC lib/accel/accel_sw.o
00:01:33.763 CC lib/accel/accel_rpc.o
00:01:33.763 CC lib/fsdev/fsdev.o
00:01:33.763 CC lib/virtio/virtio.o
00:01:33.763 CC lib/virtio/virtio_vhost_user.o
00:01:33.763 CC lib/init/json_config.o
00:01:33.763 CC lib/fsdev/fsdev_io.o
00:01:33.763 CC lib/virtio/virtio_vfio_user.o
00:01:33.763 CC lib/fsdev/fsdev_rpc.o
00:01:33.763 CC lib/virtio/virtio_pci.o
00:01:33.763 CC lib/init/subsystem.o
00:01:33.763 CC lib/blob/blobstore.o
00:01:33.763 CC lib/init/rpc.o
00:01:33.763 CC lib/blob/request.o
00:01:33.763 CC lib/blob/zeroes.o
00:01:33.763 CC lib/init/subsystem_rpc.o
00:01:33.763 CC lib/vfu_tgt/tgt_rpc.o
00:01:33.763 CC lib/vfu_tgt/tgt_endpoint.o
00:01:33.763 CC lib/blob/blob_bs_dev.o
00:01:34.074 LIB libspdk_init.a
00:01:34.074 SO libspdk_init.so.6.0
00:01:34.074 SYMLINK libspdk_init.so
00:01:34.074 LIB libspdk_virtio.a
00:01:34.074 LIB libspdk_vfu_tgt.a
00:01:34.074 SO libspdk_virtio.so.7.0
00:01:34.074 SO libspdk_vfu_tgt.so.3.0
00:01:34.074 SYMLINK libspdk_virtio.so
00:01:34.074 SYMLINK libspdk_vfu_tgt.so
00:01:34.387 CC lib/event/app.o
00:01:34.387 CC lib/event/reactor.o
00:01:34.387 CC lib/event/log_rpc.o
00:01:34.387 CC lib/event/app_rpc.o
00:01:34.387 CC lib/event/scheduler_static.o
00:01:34.387 LIB libspdk_fsdev.a
00:01:34.387 SO libspdk_fsdev.so.2.0
00:01:34.387 SYMLINK libspdk_fsdev.so
00:01:34.387 LIB libspdk_nvme.a
00:01:34.387 SO libspdk_nvme.so.15.0
00:01:34.387 LIB libspdk_event.a
00:01:34.660 SO libspdk_event.so.14.0
00:01:34.660 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:01:34.660 SYMLINK libspdk_event.so
00:01:34.660 LIB libspdk_accel.a
00:01:34.660 SO libspdk_accel.so.16.0
00:01:34.660 SYMLINK libspdk_nvme.so
00:01:34.660 SYMLINK libspdk_accel.so
00:01:34.918 CC lib/bdev/bdev.o
00:01:34.918 CC lib/bdev/bdev_rpc.o
00:01:34.918 CC lib/bdev/bdev_zone.o
00:01:34.918 CC lib/bdev/part.o
00:01:34.918 CC lib/bdev/scsi_nvme.o
00:01:35.177 LIB libspdk_fuse_dispatcher.a
00:01:35.177 SO libspdk_fuse_dispatcher.so.1.0
00:01:35.177 SYMLINK libspdk_fuse_dispatcher.so
00:01:35.744 LIB libspdk_blob.a
00:01:35.744 SO libspdk_blob.so.11.0
00:01:35.744 SYMLINK libspdk_blob.so
00:01:36.004 CC lib/blobfs/blobfs.o
00:01:36.004 CC lib/blobfs/tree.o
00:01:36.004 CC lib/lvol/lvol.o
00:01:36.572 LIB libspdk_lvol.a
00:01:36.572 SO libspdk_lvol.so.10.0
00:01:36.572 SYMLINK libspdk_lvol.so
00:01:36.572 LIB libspdk_blobfs.a
00:01:36.831 SO libspdk_blobfs.so.10.0
00:01:36.831 SYMLINK libspdk_blobfs.so
00:01:36.831 LIB libspdk_bdev.a
00:01:36.831 SO libspdk_bdev.so.17.0
00:01:36.831 SYMLINK libspdk_bdev.so
00:01:37.090 CC lib/ftl/ftl_core.o
00:01:37.090 CC lib/ftl/ftl_init.o
00:01:37.090 CC lib/ftl/ftl_layout.o
00:01:37.090 CC lib/ftl/ftl_io.o
00:01:37.090 CC lib/ftl/ftl_sb.o
00:01:37.090 CC lib/ftl/ftl_debug.o
00:01:37.090 CC lib/ftl/ftl_l2p.o
00:01:37.090 CC lib/scsi/dev.o
00:01:37.090 CC lib/scsi/lun.o
00:01:37.090 CC lib/ftl/ftl_l2p_flat.o
00:01:37.090 CC lib/scsi/scsi.o
00:01:37.090 CC lib/ftl/ftl_band.o
00:01:37.090 CC lib/scsi/port.o
00:01:37.090 CC lib/ftl/ftl_nv_cache.o
00:01:37.090 CC lib/scsi/scsi_bdev.o
00:01:37.090 CC lib/ftl/ftl_band_ops.o
00:01:37.090 CC lib/ftl/ftl_writer.o
00:01:37.090 CC lib/scsi/scsi_pr.o
00:01:37.090 CC lib/scsi/scsi_rpc.o
00:01:37.090 CC lib/scsi/task.o
00:01:37.090 CC lib/ftl/ftl_l2p_cache.o
00:01:37.090 CC lib/ftl/ftl_reloc.o
00:01:37.090 CC lib/ftl/ftl_p2l_log.o
00:01:37.090 CC lib/ftl/ftl_rq.o
00:01:37.090 CC lib/ftl/ftl_p2l.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt.o
00:01:37.090 CC lib/nvmf/ctrlr.o
00:01:37.090 CC lib/ublk/ublk_rpc.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_startup.o
00:01:37.090 CC lib/nvmf/ctrlr_discovery.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_md.o
00:01:37.090 CC lib/ublk/ublk.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_misc.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:01:37.090 CC lib/nvmf/subsystem.o
00:01:37.090 CC lib/nvmf/ctrlr_bdev.o
00:01:37.090 CC lib/nvmf/nvmf.o
00:01:37.090 CC lib/nvmf/transport.o
00:01:37.090 CC lib/nvmf/nvmf_rpc.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_band.o
00:01:37.090 CC lib/nvmf/tcp.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:01:37.090 CC lib/nvmf/stubs.o
00:01:37.090 CC lib/nvmf/mdns_server.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:01:37.090 CC lib/nvmf/vfio_user.o
00:01:37.090 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:01:37.090 CC lib/ftl/utils/ftl_conf.o
00:01:37.090 CC lib/nvmf/rdma.o
00:01:37.090 CC lib/ftl/utils/ftl_mempool.o
00:01:37.090 CC lib/nvmf/auth.o
00:01:37.090 CC lib/ftl/utils/ftl_md.o
00:01:37.090 CC lib/ftl/utils/ftl_property.o
00:01:37.090 CC lib/ftl/utils/ftl_bitmap.o
00:01:37.090 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:01:37.090 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:01:37.090 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:01:37.090 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:01:37.090 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:01:37.090 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:01:37.090 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:01:37.090 CC lib/ftl/upgrade/ftl_sb_v3.o
00:01:37.090 CC lib/nbd/nbd.o
00:01:37.090 CC lib/ftl/upgrade/ftl_sb_v5.o
00:01:37.090 CC lib/nbd/nbd_rpc.o
00:01:37.090 CC lib/ftl/nvc/ftl_nvc_dev.o
00:01:37.090 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:01:37.090 CC lib/ftl/base/ftl_base_dev.o
00:01:37.090 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:01:37.090 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:01:37.090 CC lib/ftl/ftl_trace.o
00:01:37.090 CC lib/ftl/base/ftl_base_bdev.o
00:01:37.660 LIB libspdk_nbd.a
00:01:37.660 SO libspdk_nbd.so.7.0
00:01:37.660 LIB libspdk_scsi.a
00:01:37.660 SYMLINK libspdk_nbd.so
00:01:37.660 SO libspdk_scsi.so.9.0
00:01:37.660 SYMLINK libspdk_scsi.so
00:01:37.660 LIB libspdk_ublk.a
00:01:37.660 SO libspdk_ublk.so.3.0
00:01:37.919 SYMLINK libspdk_ublk.so
00:01:37.919 LIB libspdk_ftl.a
00:01:37.919 CC lib/vhost/vhost.o
00:01:37.919 CC lib/vhost/vhost_scsi.o
00:01:37.919 CC lib/vhost/vhost_rpc.o
00:01:37.919 CC lib/vhost/vhost_blk.o
00:01:37.919 CC lib/vhost/rte_vhost_user.o
00:01:37.919 CC lib/iscsi/conn.o
00:01:37.919 CC lib/iscsi/init_grp.o
00:01:37.919 CC lib/iscsi/param.o
00:01:37.919 CC lib/iscsi/iscsi.o
00:01:37.919 CC lib/iscsi/portal_grp.o
00:01:37.919 CC lib/iscsi/tgt_node.o
00:01:37.919 CC lib/iscsi/task.o
00:01:37.919 CC lib/iscsi/iscsi_rpc.o
00:01:37.919 CC lib/iscsi/iscsi_subsystem.o
00:01:37.919 SO libspdk_ftl.so.9.0
00:01:38.178 SYMLINK libspdk_ftl.so
00:01:38.437 LIB libspdk_nvmf.a
00:01:38.437 SO libspdk_nvmf.so.20.0
00:01:38.696 SYMLINK libspdk_nvmf.so
00:01:38.696 LIB libspdk_vhost.a
00:01:38.696 SO libspdk_vhost.so.8.0
00:01:38.955 SYMLINK libspdk_vhost.so
00:01:38.955 LIB libspdk_iscsi.a
00:01:38.955 SO libspdk_iscsi.so.8.0
00:01:39.214 SYMLINK libspdk_iscsi.so
00:01:39.474 CC module/vfu_device/vfu_virtio.o
00:01:39.474 CC module/vfu_device/vfu_virtio_rpc.o
00:01:39.474 CC module/vfu_device/vfu_virtio_fs.o
00:01:39.474 CC module/vfu_device/vfu_virtio_blk.o
00:01:39.474 CC module/vfu_device/vfu_virtio_scsi.o
00:01:39.474 CC module/env_dpdk/env_dpdk_rpc.o
00:01:39.474 CC module/accel/error/accel_error.o
00:01:39.474 CC module/accel/error/accel_error_rpc.o
00:01:39.474 CC module/accel/dsa/accel_dsa.o
00:01:39.474 CC module/accel/dsa/accel_dsa_rpc.o
00:01:39.474 CC module/accel/iaa/accel_iaa.o
00:01:39.474 CC module/accel/iaa/accel_iaa_rpc.o
00:01:39.474 CC module/keyring/linux/keyring.o
00:01:39.474 CC module/keyring/linux/keyring_rpc.o
00:01:39.474 CC module/sock/posix/posix.o
00:01:39.474 CC module/fsdev/aio/fsdev_aio.o
00:01:39.474 CC module/fsdev/aio/linux_aio_mgr.o
00:01:39.474 CC module/keyring/file/keyring.o
00:01:39.474 CC module/fsdev/aio/fsdev_aio_rpc.o
00:01:39.474 CC module/keyring/file/keyring_rpc.o
00:01:39.474 CC module/scheduler/dynamic/scheduler_dynamic.o
00:01:39.474 CC module/blob/bdev/blob_bdev.o
00:01:39.474 CC module/accel/ioat/accel_ioat.o
00:01:39.474 CC module/accel/ioat/accel_ioat_rpc.o
00:01:39.474 CC module/scheduler/gscheduler/gscheduler.o
00:01:39.474 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:01:39.474 LIB libspdk_env_dpdk_rpc.a
00:01:39.474 SO libspdk_env_dpdk_rpc.so.6.0
00:01:39.474 SYMLINK libspdk_env_dpdk_rpc.so
00:01:39.474 LIB libspdk_scheduler_gscheduler.a
00:01:39.474 LIB libspdk_keyring_linux.a
00:01:39.474 SO libspdk_scheduler_gscheduler.so.4.0
00:01:39.474 LIB libspdk_keyring_file.a
00:01:39.734 SO libspdk_keyring_linux.so.1.0
00:01:39.734 LIB libspdk_accel_error.a
00:01:39.734 LIB libspdk_scheduler_dpdk_governor.a
00:01:39.734 SO libspdk_keyring_file.so.2.0
00:01:39.734 SYMLINK libspdk_scheduler_gscheduler.so
00:01:39.734 LIB libspdk_accel_ioat.a
00:01:39.734 SO libspdk_scheduler_dpdk_governor.so.4.0
00:01:39.734 LIB libspdk_accel_iaa.a
00:01:39.734 SO libspdk_accel_error.so.2.0
00:01:39.734 LIB libspdk_scheduler_dynamic.a
00:01:39.734 SO libspdk_accel_ioat.so.6.0
00:01:39.734 SYMLINK libspdk_keyring_linux.so
00:01:39.734 SO libspdk_accel_iaa.so.3.0
00:01:39.734 SYMLINK libspdk_keyring_file.so
00:01:39.734 SO libspdk_scheduler_dynamic.so.4.0
00:01:39.734 SYMLINK libspdk_scheduler_dpdk_governor.so
00:01:39.734 SYMLINK libspdk_accel_error.so
00:01:39.734 LIB libspdk_accel_dsa.a
00:01:39.734 SYMLINK libspdk_accel_ioat.so
00:01:39.734 LIB libspdk_blob_bdev.a
00:01:39.734 SYMLINK libspdk_scheduler_dynamic.so
00:01:39.734 SYMLINK libspdk_accel_iaa.so
00:01:39.734 SO libspdk_accel_dsa.so.5.0
00:01:39.734 SO libspdk_blob_bdev.so.11.0
00:01:39.734 SYMLINK libspdk_blob_bdev.so
00:01:39.734 SYMLINK libspdk_accel_dsa.so
00:01:39.734 LIB libspdk_fsdev_aio.a
00:01:39.992 SO libspdk_fsdev_aio.so.1.0
00:01:39.992 LIB libspdk_vfu_device.a
00:01:39.992 SO libspdk_vfu_device.so.3.0
00:01:39.992 SYMLINK libspdk_fsdev_aio.so
00:01:39.992 SYMLINK libspdk_vfu_device.so
00:01:39.992 CC module/bdev/nvme/bdev_nvme.o
00:01:39.992 CC module/bdev/delay/vbdev_delay.o
00:01:39.992 CC module/bdev/nvme/bdev_nvme_rpc.o
00:01:39.992 CC module/bdev/nvme/nvme_rpc.o
00:01:39.992 CC module/bdev/gpt/gpt.o
00:01:39.992 CC module/bdev/nvme/bdev_mdns_client.o
00:01:39.992 CC module/bdev/malloc/bdev_malloc.o
00:01:39.992 CC module/bdev/gpt/vbdev_gpt.o
00:01:39.992 CC module/bdev/malloc/bdev_malloc_rpc.o
00:01:39.992 CC module/bdev/nvme/vbdev_opal.o
00:01:39.992 CC module/bdev/nvme/vbdev_opal_rpc.o
00:01:39.992 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:01:39.992 CC module/bdev/delay/vbdev_delay_rpc.o
00:01:39.992 CC module/bdev/aio/bdev_aio.o
00:01:39.992 CC module/bdev/aio/bdev_aio_rpc.o
00:01:39.992 CC module/bdev/error/vbdev_error_rpc.o
00:01:39.992 CC module/bdev/error/vbdev_error.o
00:01:39.992 CC module/bdev/lvol/vbdev_lvol.o
00:01:39.992 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:01:39.992 CC module/bdev/passthru/vbdev_passthru.o
00:01:39.992 CC module/bdev/zone_block/vbdev_zone_block.o
00:01:39.992 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:01:39.992 CC module/bdev/ftl/bdev_ftl.o
00:01:39.992 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:01:39.992 CC module/bdev/ftl/bdev_ftl_rpc.o
00:01:39.992 CC module/bdev/raid/bdev_raid.o
00:01:39.992 CC module/bdev/null/bdev_null.o
00:01:39.992 CC module/bdev/null/bdev_null_rpc.o
00:01:39.992 CC module/bdev/raid/bdev_raid_rpc.o
00:01:39.992 CC module/bdev/raid/bdev_raid_sb.o
00:01:39.992 CC module/bdev/raid/raid0.o
00:01:39.992 CC module/bdev/raid/raid1.o
00:01:39.992 CC module/bdev/iscsi/bdev_iscsi.o
00:01:39.992 CC module/bdev/raid/concat.o
00:01:39.992 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:01:39.992 CC module/bdev/split/vbdev_split.o
00:01:39.992 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:01:39.992 CC module/blobfs/bdev/blobfs_bdev.o
00:01:39.992 CC module/bdev/split/vbdev_split_rpc.o
00:01:39.992 CC module/bdev/virtio/bdev_virtio_scsi.o
00:01:39.992 CC module/bdev/virtio/bdev_virtio_blk.o
00:01:39.992 CC module/bdev/virtio/bdev_virtio_rpc.o
00:01:39.992 LIB libspdk_sock_posix.a
00:01:39.992 SO libspdk_sock_posix.so.6.0
00:01:40.251 SYMLINK libspdk_sock_posix.so
00:01:40.251 LIB libspdk_bdev_split.a
00:01:40.251 LIB libspdk_blobfs_bdev.a
00:01:40.251 LIB libspdk_bdev_error.a
00:01:40.251 SO libspdk_bdev_split.so.6.0
00:01:40.251 LIB libspdk_bdev_null.a
00:01:40.251 SO libspdk_blobfs_bdev.so.6.0
00:01:40.251 LIB libspdk_bdev_passthru.a
00:01:40.251 SO libspdk_bdev_error.so.6.0
00:01:40.251 SO libspdk_bdev_null.so.6.0
00:01:40.251 SO libspdk_bdev_passthru.so.6.0
00:01:40.251 LIB libspdk_bdev_aio.a
00:01:40.251 SYMLINK libspdk_bdev_split.so
00:01:40.251 LIB libspdk_bdev_gpt.a
00:01:40.251 SYMLINK libspdk_bdev_null.so
00:01:40.251 SYMLINK libspdk_blobfs_bdev.so
00:01:40.251 SO libspdk_bdev_aio.so.6.0
00:01:40.251 LIB libspdk_bdev_delay.a
00:01:40.251 SYMLINK libspdk_bdev_error.so
00:01:40.251 SO libspdk_bdev_gpt.so.6.0
00:01:40.251 LIB libspdk_bdev_ftl.a
00:01:40.251 SYMLINK libspdk_bdev_passthru.so
00:01:40.251 SO libspdk_bdev_delay.so.6.0
00:01:40.251 SYMLINK libspdk_bdev_aio.so
00:01:40.251 SO libspdk_bdev_ftl.so.6.0
00:01:40.251 LIB libspdk_bdev_zone_block.a
00:01:40.251 LIB libspdk_bdev_malloc.a
00:01:40.251 SYMLINK libspdk_bdev_gpt.so
00:01:40.251 SYMLINK libspdk_bdev_delay.so
00:01:40.251 SO libspdk_bdev_zone_block.so.6.0
00:01:40.251 SO libspdk_bdev_malloc.so.6.0
00:01:40.510 LIB libspdk_bdev_iscsi.a
00:01:40.510 SYMLINK libspdk_bdev_ftl.so
00:01:40.510 SO libspdk_bdev_iscsi.so.6.0
00:01:40.510 SYMLINK libspdk_bdev_malloc.so
00:01:40.510 SYMLINK libspdk_bdev_zone_block.so
00:01:40.510 SYMLINK libspdk_bdev_iscsi.so
00:01:40.510 LIB libspdk_bdev_lvol.a
00:01:40.510 SO libspdk_bdev_lvol.so.6.0
00:01:40.510 LIB libspdk_bdev_virtio.a
00:01:40.510 SO libspdk_bdev_virtio.so.6.0
00:01:40.510 SYMLINK libspdk_bdev_lvol.so
00:01:40.510 SYMLINK libspdk_bdev_virtio.so
00:01:40.829 LIB libspdk_bdev_raid.a
00:01:40.829 SO libspdk_bdev_raid.so.6.0
00:01:40.829 SYMLINK libspdk_bdev_raid.so
00:01:41.398 LIB libspdk_bdev_nvme.a
00:01:41.398 SO libspdk_bdev_nvme.so.7.1
00:01:41.657 SYMLINK libspdk_bdev_nvme.so
00:01:41.917 CC module/event/subsystems/iobuf/iobuf.o
00:01:41.917 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:01:41.917 CC module/event/subsystems/vmd/vmd.o
00:01:41.917 CC module/event/subsystems/vmd/vmd_rpc.o
00:01:41.917 CC module/event/subsystems/scheduler/scheduler.o
00:01:41.917 CC module/event/subsystems/fsdev/fsdev.o
00:01:41.917 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:01:41.917 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:01:41.917 CC module/event/subsystems/sock/sock.o
00:01:41.917 CC module/event/subsystems/keyring/keyring.o
00:01:41.917 LIB libspdk_event_scheduler.a
00:01:42.175 LIB libspdk_event_vhost_blk.a
00:01:42.175 SO libspdk_event_scheduler.so.4.0
00:01:42.175 SO libspdk_event_vhost_blk.so.3.0
00:01:42.175 LIB libspdk_event_keyring.a
00:01:42.175 LIB libspdk_event_fsdev.a
00:01:42.175 LIB libspdk_event_vmd.a
00:01:42.175 LIB libspdk_event_vfu_tgt.a
00:01:42.175 LIB libspdk_event_sock.a
00:01:42.175 LIB libspdk_event_iobuf.a
00:01:42.175 SO libspdk_event_keyring.so.1.0
00:01:42.175 SO libspdk_event_vmd.so.6.0
00:01:42.175 SO libspdk_event_fsdev.so.1.0
00:01:42.175 SO libspdk_event_vfu_tgt.so.3.0
00:01:42.175 SO libspdk_event_sock.so.5.0
00:01:42.175 SO libspdk_event_iobuf.so.3.0
00:01:42.175 SYMLINK libspdk_event_scheduler.so
00:01:42.175 SYMLINK libspdk_event_vhost_blk.so
00:01:42.175 SYMLINK libspdk_event_vmd.so
00:01:42.175 SYMLINK libspdk_event_vfu_tgt.so
00:01:42.175 SYMLINK libspdk_event_keyring.so
00:01:42.175 SYMLINK libspdk_event_sock.so
00:01:42.175 SYMLINK libspdk_event_fsdev.so
00:01:42.175 SYMLINK libspdk_event_iobuf.so
00:01:42.175 CC module/event/subsystems/accel/accel.o
00:01:42.433 LIB libspdk_event_accel.a
00:01:42.433 SO libspdk_event_accel.so.6.0
00:01:42.433 SYMLINK libspdk_event_accel.so
00:01:42.692 CC module/event/subsystems/bdev/bdev.o
00:01:42.692 LIB libspdk_event_bdev.a
00:01:42.692 SO libspdk_event_bdev.so.6.0
00:01:42.951 SYMLINK libspdk_event_bdev.so
00:01:42.951 CC module/event/subsystems/nbd/nbd.o
00:01:42.951 CC module/event/subsystems/scsi/scsi.o
00:01:42.951 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:01:42.951 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:01:42.951 CC module/event/subsystems/ublk/ublk.o
00:01:43.210 LIB libspdk_event_nbd.a
00:01:43.210 LIB libspdk_event_ublk.a
00:01:43.210 LIB libspdk_event_scsi.a
00:01:43.210 SO libspdk_event_nbd.so.6.0
00:01:43.210 SO libspdk_event_ublk.so.3.0
00:01:43.210 SO libspdk_event_scsi.so.6.0
00:01:43.210 SYMLINK libspdk_event_ublk.so
00:01:43.210 SYMLINK libspdk_event_nbd.so
00:01:43.210 SYMLINK libspdk_event_scsi.so
00:01:43.210 LIB libspdk_event_nvmf.a
00:01:43.210 SO libspdk_event_nvmf.so.6.0
00:01:43.210 SYMLINK libspdk_event_nvmf.so
00:01:43.469 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:01:43.469 CC module/event/subsystems/iscsi/iscsi.o
00:01:43.469 LIB libspdk_event_vhost_scsi.a
00:01:43.469 SO libspdk_event_vhost_scsi.so.3.0
00:01:43.469 LIB libspdk_event_iscsi.a
00:01:43.469 SO libspdk_event_iscsi.so.6.0
00:01:43.469 SYMLINK libspdk_event_vhost_scsi.so
00:01:43.728 SYMLINK libspdk_event_iscsi.so
00:01:43.728 SO libspdk.so.6.0
00:01:43.728 SYMLINK libspdk.so
00:01:43.987 CXX app/trace/trace.o
00:01:43.987 CC app/spdk_nvme_discover/discovery_aer.o
00:01:43.987 CC app/spdk_top/spdk_top.o
00:01:43.987 CC app/spdk_nvme_identify/identify.o
00:01:43.987 CC app/spdk_lspci/spdk_lspci.o
00:01:43.987 CC app/trace_record/trace_record.o
00:01:43.987 CC test/rpc_client/rpc_client_test.o
00:01:43.987 CC app/spdk_nvme_perf/perf.o
00:01:43.987 TEST_HEADER include/spdk/accel_module.h
00:01:43.987 TEST_HEADER include/spdk/assert.h
00:01:43.987 TEST_HEADER include/spdk/accel.h
00:01:43.987 TEST_HEADER include/spdk/barrier.h
00:01:43.987 TEST_HEADER include/spdk/base64.h
00:01:43.987 TEST_HEADER include/spdk/bdev.h
00:01:43.987 TEST_HEADER include/spdk/bdev_module.h
00:01:43.987 TEST_HEADER include/spdk/bdev_zone.h
00:01:43.987 TEST_HEADER include/spdk/bit_array.h
00:01:43.987 TEST_HEADER include/spdk/bit_pool.h
00:01:43.987 TEST_HEADER include/spdk/blob_bdev.h
00:01:43.987 TEST_HEADER include/spdk/blobfs_bdev.h
00:01:43.987 TEST_HEADER include/spdk/blobfs.h
00:01:43.987 TEST_HEADER include/spdk/blob.h
00:01:43.987 TEST_HEADER include/spdk/conf.h
00:01:43.987 TEST_HEADER include/spdk/cpuset.h
00:01:43.987 TEST_HEADER include/spdk/crc16.h
00:01:43.987 TEST_HEADER include/spdk/crc32.h
00:01:43.987 TEST_HEADER include/spdk/config.h
00:01:43.987 TEST_HEADER include/spdk/crc64.h
00:01:43.987 TEST_HEADER include/spdk/dma.h
00:01:43.987 TEST_HEADER include/spdk/dif.h
00:01:43.987 TEST_HEADER include/spdk/endian.h
00:01:43.987 TEST_HEADER include/spdk/env_dpdk.h
00:01:43.987 TEST_HEADER include/spdk/event.h
00:01:43.987 TEST_HEADER include/spdk/env.h
00:01:43.987 CC examples/interrupt_tgt/interrupt_tgt.o
00:01:43.987 TEST_HEADER include/spdk/fd_group.h
00:01:43.987 TEST_HEADER include/spdk/fd.h
00:01:43.987 TEST_HEADER include/spdk/file.h
00:01:43.987 TEST_HEADER include/spdk/fsdev.h
00:01:43.987 CC app/spdk_dd/spdk_dd.o
00:01:43.987 TEST_HEADER include/spdk/fsdev_module.h
00:01:43.987 TEST_HEADER include/spdk/ftl.h
00:01:43.987 TEST_HEADER include/spdk/fuse_dispatcher.h
00:01:43.987 TEST_HEADER include/spdk/gpt_spec.h
00:01:43.987 TEST_HEADER include/spdk/hexlify.h
00:01:43.987 TEST_HEADER include/spdk/histogram_data.h
00:01:43.987 TEST_HEADER include/spdk/idxd.h
00:01:43.987 TEST_HEADER include/spdk/idxd_spec.h
00:01:43.987 TEST_HEADER include/spdk/init.h
00:01:43.987 TEST_HEADER include/spdk/ioat.h
00:01:43.987 TEST_HEADER include/spdk/ioat_spec.h
00:01:43.987 TEST_HEADER include/spdk/iscsi_spec.h
00:01:43.987 TEST_HEADER include/spdk/json.h
00:01:43.987 TEST_HEADER include/spdk/jsonrpc.h
00:01:43.987 TEST_HEADER include/spdk/keyring.h
00:01:43.987 TEST_HEADER include/spdk/keyring_module.h
00:01:43.987 TEST_HEADER include/spdk/likely.h
00:01:43.987 TEST_HEADER include/spdk/log.h
00:01:43.987 TEST_HEADER include/spdk/lvol.h
00:01:43.987 TEST_HEADER include/spdk/memory.h
00:01:43.987 TEST_HEADER include/spdk/nbd.h
00:01:43.987 TEST_HEADER include/spdk/mmio.h
00:01:43.987 TEST_HEADER include/spdk/net.h
00:01:43.987 TEST_HEADER include/spdk/md5.h
00:01:43.987 TEST_HEADER include/spdk/notify.h
00:01:43.987 TEST_HEADER include/spdk/nvme.h
00:01:43.987 TEST_HEADER include/spdk/nvme_intel.h
00:01:43.987 TEST_HEADER include/spdk/nvme_ocssd.h
00:01:43.987 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:01:43.987 TEST_HEADER include/spdk/nvme_spec.h
00:01:43.987 TEST_HEADER include/spdk/nvme_zns.h
00:01:43.987 TEST_HEADER include/spdk/nvmf_cmd.h
00:01:43.987 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:01:43.987 CC app/iscsi_tgt/iscsi_tgt.o
00:01:43.987 TEST_HEADER include/spdk/nvmf.h
00:01:43.987 TEST_HEADER include/spdk/nvmf_spec.h
00:01:43.987 TEST_HEADER include/spdk/nvmf_transport.h
00:01:43.987 TEST_HEADER include/spdk/opal.h
00:01:43.987 TEST_HEADER include/spdk/opal_spec.h
00:01:43.987 TEST_HEADER include/spdk/pci_ids.h
00:01:43.987 TEST_HEADER include/spdk/pipe.h
00:01:43.987 TEST_HEADER include/spdk/reduce.h
00:01:43.987 TEST_HEADER include/spdk/queue.h
00:01:43.988 TEST_HEADER include/spdk/rpc.h
00:01:43.988 TEST_HEADER include/spdk/scheduler.h
00:01:43.988 TEST_HEADER include/spdk/scsi.h
00:01:43.988 TEST_HEADER include/spdk/scsi_spec.h
00:01:43.988 TEST_HEADER include/spdk/sock.h
00:01:43.988 TEST_HEADER include/spdk/stdinc.h
00:01:43.988 TEST_HEADER include/spdk/string.h
00:01:43.988 TEST_HEADER include/spdk/thread.h
00:01:43.988 TEST_HEADER include/spdk/trace.h
00:01:43.988 TEST_HEADER include/spdk/trace_parser.h
00:01:43.988 TEST_HEADER include/spdk/tree.h
00:01:43.988 TEST_HEADER include/spdk/ublk.h
00:01:43.988 TEST_HEADER include/spdk/util.h
00:01:43.988 TEST_HEADER include/spdk/uuid.h
00:01:43.988 TEST_HEADER include/spdk/version.h
00:01:43.988 TEST_HEADER include/spdk/vfio_user_pci.h
00:01:43.988 TEST_HEADER include/spdk/vfio_user_spec.h
00:01:43.988 TEST_HEADER include/spdk/vhost.h
00:01:43.988 CC app/spdk_tgt/spdk_tgt.o
00:01:43.988 TEST_HEADER include/spdk/vmd.h
00:01:43.988 TEST_HEADER include/spdk/xor.h
00:01:43.988 TEST_HEADER include/spdk/zipf.h
00:01:43.988 CXX test/cpp_headers/accel.o
00:01:43.988 CXX test/cpp_headers/accel_module.o
00:01:43.988 CXX test/cpp_headers/assert.o
00:01:43.988 CXX test/cpp_headers/barrier.o
00:01:43.988 CXX test/cpp_headers/base64.o
00:01:43.988 CXX test/cpp_headers/bdev.o
00:01:43.988 CXX test/cpp_headers/bdev_zone.o
00:01:43.988 CXX test/cpp_headers/bdev_module.o
00:01:43.988 CXX test/cpp_headers/bit_array.o
00:01:43.988 CXX test/cpp_headers/bit_pool.o
00:01:43.988 CXX test/cpp_headers/blob_bdev.o
00:01:43.988 CXX test/cpp_headers/blobfs_bdev.o
00:01:43.988 CXX test/cpp_headers/blobfs.o
00:01:43.988 CXX test/cpp_headers/blob.o
00:01:43.988 CXX test/cpp_headers/conf.o
00:01:43.988 CXX test/cpp_headers/config.o
00:01:43.988 CXX test/cpp_headers/cpuset.o
00:01:43.988 CXX test/cpp_headers/crc16.o
00:01:43.988 CXX test/cpp_headers/crc64.o
00:01:43.988 CXX test/cpp_headers/crc32.o
00:01:43.988 CXX test/cpp_headers/dif.o
00:01:43.988 CXX test/cpp_headers/dma.o
00:01:43.988 CXX test/cpp_headers/endian.o
00:01:43.988 CXX test/cpp_headers/env_dpdk.o
00:01:43.988 CXX test/cpp_headers/event.o
00:01:43.988 CXX test/cpp_headers/env.o
00:01:43.988 CXX test/cpp_headers/fd_group.o
00:01:43.988 CXX test/cpp_headers/fd.o
00:01:43.988 CXX test/cpp_headers/fsdev.o
00:01:43.988 CXX test/cpp_headers/file.o
00:01:43.988 CXX test/cpp_headers/fsdev_module.o
00:01:43.988 CXX test/cpp_headers/ftl.o
00:01:43.988 CXX test/cpp_headers/fuse_dispatcher.o
00:01:43.988 CXX test/cpp_headers/gpt_spec.o
00:01:43.988 CXX test/cpp_headers/hexlify.o
00:01:43.988 CXX test/cpp_headers/histogram_data.o
00:01:43.988 CXX test/cpp_headers/idxd.o
00:01:43.988 CXX test/cpp_headers/init.o
00:01:43.988 CXX test/cpp_headers/idxd_spec.o
00:01:43.988 CXX test/cpp_headers/ioat.o
00:01:43.988 CXX test/cpp_headers/ioat_spec.o
00:01:43.988 CXX test/cpp_headers/iscsi_spec.o
00:01:43.988 CXX test/cpp_headers/json.o
00:01:43.988 CXX test/cpp_headers/jsonrpc.o
00:01:43.988 CXX test/cpp_headers/keyring.o
00:01:43.988 CXX test/cpp_headers/likely.o
00:01:43.988 CXX test/cpp_headers/keyring_module.o
00:01:43.988 CXX test/cpp_headers/log.o
00:01:43.988 CXX test/cpp_headers/lvol.o
00:01:43.988 CC examples/util/zipf/zipf.o
00:01:43.988 CXX test/cpp_headers/memory.o
00:01:43.988 CXX test/cpp_headers/md5.o
00:01:43.988 CXX test/cpp_headers/nbd.o
00:01:43.988 CXX test/cpp_headers/mmio.o
00:01:43.988 CXX test/cpp_headers/net.o
00:01:43.988 CXX test/cpp_headers/notify.o
00:01:43.988 CC test/env/vtophys/vtophys.o
00:01:43.988 CC test/thread/poller_perf/poller_perf.o
00:01:43.988 CXX test/cpp_headers/nvme.o
00:01:43.988 CXX test/cpp_headers/nvme_ocssd_spec.o
00:01:43.988 CXX test/cpp_headers/nvme_intel.o
00:01:43.988 CXX test/cpp_headers/nvme_ocssd.o
00:01:43.988 CXX test/cpp_headers/nvmf_cmd.o
00:01:43.988 CXX test/cpp_headers/nvme_spec.o
00:01:43.988 CC examples/ioat/perf/perf.o
00:01:43.988 CXX test/cpp_headers/nvme_zns.o
00:01:43.988 CXX test/cpp_headers/nvmf_fc_spec.o
00:01:43.988 CXX test/cpp_headers/nvmf.o
00:01:43.988 CC app/fio/nvme/fio_plugin.o
00:01:43.988 CC test/app/histogram_perf/histogram_perf.o
00:01:43.988 CXX test/cpp_headers/pci_ids.o
00:01:43.988 CXX test/cpp_headers/nvmf_spec.o
00:01:43.988 CXX test/cpp_headers/nvmf_transport.o
00:01:43.988 CXX test/cpp_headers/opal.o
00:01:43.988 CC examples/ioat/verify/verify.o
00:01:43.988 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:01:43.988 CXX test/cpp_headers/opal_spec.o
00:01:43.988 CXX test/cpp_headers/pipe.o
00:01:43.988 CXX test/cpp_headers/queue.o
00:01:43.988 CC test/env/pci/pci_ut.o
00:01:43.988 CC test/app/jsoncat/jsoncat.o
00:01:43.988 CC test/env/memory/memory_ut.o
00:01:43.988 CXX test/cpp_headers/reduce.o
00:01:43.988 CXX test/cpp_headers/scsi.o
00:01:43.988 CXX test/cpp_headers/rpc.o
00:01:43.988 CC test/app/stub/stub.o
00:01:43.988 CXX test/cpp_headers/scheduler.o
00:01:43.988 CXX test/cpp_headers/scsi_spec.o
00:01:43.988 CXX test/cpp_headers/stdinc.o
00:01:43.988 CXX test/cpp_headers/sock.o
00:01:43.988 CXX test/cpp_headers/string.o
00:01:43.988 CXX test/cpp_headers/thread.o
00:01:43.988 CXX test/cpp_headers/tree.o
00:01:43.988 CXX test/cpp_headers/trace.o
00:01:43.988 CXX test/cpp_headers/ublk.o
00:01:43.988 CXX test/cpp_headers/trace_parser.o
00:01:43.988 CXX test/cpp_headers/util.o
00:01:43.988 CXX test/cpp_headers/vfio_user_pci.o
00:01:43.988 CXX test/cpp_headers/uuid.o
00:01:43.988 CXX test/cpp_headers/version.o
00:01:43.988 CXX test/cpp_headers/vfio_user_spec.o
00:01:43.988 CXX test/cpp_headers/vhost.o
00:01:43.988 CXX test/cpp_headers/xor.o
00:01:43.988 CXX test/cpp_headers/vmd.o
00:01:43.988 CXX test/cpp_headers/zipf.o
00:01:43.988 CC test/dma/test_dma/test_dma.o
00:01:43.988 CC app/fio/bdev/fio_plugin.o
00:01:43.988 CC test/app/bdev_svc/bdev_svc.o
00:01:44.251 LINK rpc_client_test
00:01:44.251 LINK spdk_lspci
00:01:44.251 LINK spdk_nvme_discover
00:01:44.251 LINK interrupt_tgt
00:01:44.251 CC test/env/mem_callbacks/mem_callbacks.o
00:01:44.251 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:01:44.251 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:01:44.509 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:01:44.509 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:01:44.510 LINK nvmf_tgt
00:01:44.510 LINK iscsi_tgt
00:01:44.510 LINK vtophys
00:01:44.510 LINK spdk_trace_record
00:01:44.510 LINK poller_perf
00:01:44.510 LINK spdk_tgt
00:01:44.510 LINK zipf
00:01:44.510 LINK env_dpdk_post_init
00:01:44.510 LINK jsoncat
00:01:44.510 LINK histogram_perf
00:01:44.767 LINK stub
00:01:44.767 LINK spdk_trace
00:01:44.767 LINK spdk_dd
00:01:44.767 LINK verify
00:01:44.767 LINK bdev_svc
00:01:44.767 LINK ioat_perf
00:01:44.767 CC test/event/event_perf/event_perf.o
00:01:44.767 CC examples/vmd/led/led.o
00:01:44.767 CC test/event/reactor_perf/reactor_perf.o
00:01:44.767 CC examples/sock/hello_world/hello_sock.o
00:01:44.767 CC examples/vmd/lsvmd/lsvmd.o
00:01:44.767 CC test/event/reactor/reactor.o
00:01:44.767 CC examples/idxd/perf/perf.o
00:01:45.026 CC test/event/app_repeat/app_repeat.o
00:01:45.026 LINK test_dma
00:01:45.026 CC test/event/scheduler/scheduler.o
00:01:45.026 LINK nvme_fuzz
00:01:45.026 CC examples/thread/thread/thread_ex.o
00:01:45.026 LINK spdk_bdev
00:01:45.026 LINK spdk_nvme
00:01:45.026 LINK pci_ut
00:01:45.026 CC app/vhost/vhost.o
00:01:45.026 LINK spdk_nvme_perf
00:01:45.026 LINK lsvmd
00:01:45.026 LINK event_perf
00:01:45.026 LINK led
00:01:45.026 LINK mem_callbacks
00:01:45.026 LINK vhost_fuzz
00:01:45.026 LINK app_repeat
00:01:45.026 LINK reactor
00:01:45.026 LINK reactor_perf
00:01:45.026 LINK spdk_top
00:01:45.026 LINK spdk_nvme_identify
00:01:45.026 LINK scheduler
00:01:45.026 LINK hello_sock
00:01:45.026 LINK vhost
00:01:45.286 LINK thread
00:01:45.286 LINK idxd_perf
00:01:45.286 CC test/nvme/aer/aer.o
00:01:45.286 CC test/nvme/e2edp/nvme_dp.o
00:01:45.286 CC test/nvme/startup/startup.o
00:01:45.286 CC test/nvme/fused_ordering/fused_ordering.o
00:01:45.286 CC test/nvme/overhead/overhead.o
00:01:45.286 CC test/nvme/reserve/reserve.o
00:01:45.286 CC test/nvme/doorbell_aers/doorbell_aers.o
00:01:45.286 CC test/nvme/sgl/sgl.o
00:01:45.286 CC test/nvme/reset/reset.o
00:01:45.286 CC test/nvme/boot_partition/boot_partition.o
00:01:45.286 CC test/nvme/connect_stress/connect_stress.o
00:01:45.286 CC test/nvme/fdp/fdp.o
00:01:45.286 CC test/nvme/simple_copy/simple_copy.o
00:01:45.286 CC test/nvme/compliance/nvme_compliance.o
00:01:45.286 CC test/nvme/err_injection/err_injection.o
00:01:45.286 CC test/nvme/cuse/cuse.o
00:01:45.286 CC test/accel/dif/dif.o
00:01:45.286 CC test/blobfs/mkfs/mkfs.o
00:01:45.286 CC test/lvol/esnap/esnap.o
00:01:45.287 LINK startup
00:01:45.287 CC examples/nvme/nvme_manage/nvme_manage.o
00:01:45.287 CC examples/nvme/cmb_copy/cmb_copy.o
00:01:45.287 CC examples/nvme/hello_world/hello_world.o
00:01:45.287 CC examples/nvme/reconnect/reconnect.o
00:01:45.287 CC examples/nvme/hotplug/hotplug.o
00:01:45.287 CC examples/nvme/abort/abort.o
00:01:45.287 CC examples/nvme/arbitration/arbitration.o
00:01:45.287 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:01:45.287 LINK connect_stress
00:01:45.287 LINK reserve
00:01:45.547 LINK boot_partition
00:01:45.547 LINK nvme_dp
00:01:45.547 LINK aer
00:01:45.547 LINK reset
00:01:45.547 LINK fused_ordering
00:01:45.547 LINK err_injection
00:01:45.547 LINK doorbell_aers
00:01:45.547 LINK memory_ut
00:01:45.547 LINK mkfs
00:01:45.547 LINK nvme_compliance
00:01:45.547 LINK simple_copy
00:01:45.547 LINK fdp
00:01:45.547 CC examples/accel/perf/accel_perf.o
00:01:45.547 LINK sgl
00:01:45.547 LINK overhead
00:01:45.547 CC examples/fsdev/hello_world/hello_fsdev.o
00:01:45.547 CC examples/blob/cli/blobcli.o
00:01:45.547 CC examples/blob/hello_world/hello_blob.o
00:01:45.547 LINK cmb_copy
00:01:45.547 LINK pmr_persistence
00:01:45.547 LINK hello_world
00:01:45.547 LINK arbitration
00:01:45.547 LINK hotplug
00:01:45.547 LINK reconnect
00:01:45.808 LINK abort
00:01:45.808 LINK iscsi_fuzz
00:01:45.808 LINK hello_blob
00:01:45.808 LINK hello_fsdev
00:01:45.808 LINK nvme_manage
00:01:45.808 LINK dif
00:01:45.808 LINK accel_perf
00:01:45.808 LINK blobcli
00:01:46.067 LINK cuse
00:01:46.067 CC test/bdev/bdevio/bdevio.o
00:01:46.326 CC examples/bdev/bdevperf/bdevperf.o
00:01:46.326 CC examples/bdev/hello_world/hello_bdev.o
00:01:46.326 LINK hello_bdev
00:01:46.585 LINK bdevio
00:01:46.585 LINK bdevperf
00:01:46.844 CC examples/nvmf/nvmf/nvmf.o
00:01:47.103 LINK nvmf
00:01:49.004 LINK esnap
00:01:49.004 
00:01:49.004 real 0m43.187s
00:01:49.004 user 6m22.883s
00:01:49.004 sys 3m23.108s
00:01:49.004 13:45:28 make -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:49.004 13:45:28 make -- common/autotest_common.sh@10 -- $ set +x
00:01:49.004 ************************************
00:01:49.004 END TEST make
00:01:49.004 ************************************
00:01:49.004 13:45:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:01:49.004 13:45:28 -- pm/common@29 -- $ signal_monitor_resources TERM
00:01:49.004 13:45:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:01:49.004 13:45:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.004 13:45:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:01:49.004 13:45:28 -- pm/common@44 -- $ pid=518017
00:01:49.004 13:45:28 -- pm/common@50 -- $ kill -TERM 518017
00:01:49.004 13:45:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.004 13:45:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:01:49.004 13:45:28 -- pm/common@44 -- $ pid=518018
00:01:49.004 13:45:28 -- pm/common@50 -- $ kill -TERM 518018
00:01:49.004 13:45:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.004 13:45:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:01:49.004 13:45:28 -- pm/common@44 -- $ pid=518019
00:01:49.004 13:45:28 -- pm/common@50 -- $ kill -TERM 518019
00:01:49.004 13:45:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.004 13:45:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:01:49.004 13:45:28 -- pm/common@44 -- $ pid=518045
00:01:49.004 13:45:28 -- pm/common@50 -- $ sudo -E kill -TERM 518045
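The pm/common xtrace above shows the resource-monitor teardown: one pid file per monitor under the power/ output directory, each signalled in turn. A condensed reconstruction of that loop, inferred from the traced lines (the function body and the $output_dir variable are illustrative, not copied from the script):

    signal_monitor_resources() {
      # Monitors seen in the trace: collect-cpu-load, collect-vmstat,
      # collect-cpu-temp, collect-bmc-pm (the last is killed via sudo).
      local monitor pid signal=$1
      for monitor in "${MONITOR_RESOURCES[@]}"; do
        if [[ -e $output_dir/power/$monitor.pid ]]; then
          pid=$(< "$output_dir/power/$monitor.pid")
          kill -"$signal" "$pid"
        fi
      done
    }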
ver1_l : ver2_l) )) 00:01:49.005 13:45:28 -- scripts/common.sh@365 -- # decimal 1 00:01:49.005 13:45:28 -- scripts/common.sh@353 -- # local d=1 00:01:49.005 13:45:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:01:49.005 13:45:28 -- scripts/common.sh@355 -- # echo 1 00:01:49.005 13:45:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:01:49.005 13:45:28 -- scripts/common.sh@366 -- # decimal 2 00:01:49.005 13:45:28 -- scripts/common.sh@353 -- # local d=2 00:01:49.005 13:45:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:01:49.005 13:45:28 -- scripts/common.sh@355 -- # echo 2 00:01:49.005 13:45:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:01:49.005 13:45:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:01:49.005 13:45:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:01:49.005 13:45:28 -- scripts/common.sh@368 -- # return 0 00:01:49.005 13:45:28 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:01:49.005 13:45:28 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:01:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:49.005 --rc genhtml_branch_coverage=1 00:01:49.005 --rc genhtml_function_coverage=1 00:01:49.005 --rc genhtml_legend=1 00:01:49.005 --rc geninfo_all_blocks=1 00:01:49.005 --rc geninfo_unexecuted_blocks=1 00:01:49.005 00:01:49.005 ' 00:01:49.005 13:45:28 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:01:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:49.005 --rc genhtml_branch_coverage=1 00:01:49.005 --rc genhtml_function_coverage=1 00:01:49.005 --rc genhtml_legend=1 00:01:49.005 --rc geninfo_all_blocks=1 00:01:49.005 --rc geninfo_unexecuted_blocks=1 00:01:49.005 00:01:49.005 ' 00:01:49.005 13:45:28 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:01:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:49.005 --rc genhtml_branch_coverage=1 00:01:49.005 --rc genhtml_function_coverage=1 00:01:49.005 --rc genhtml_legend=1 00:01:49.005 --rc geninfo_all_blocks=1 00:01:49.005 --rc geninfo_unexecuted_blocks=1 00:01:49.005 00:01:49.005 ' 00:01:49.005 13:45:28 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:01:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:01:49.005 --rc genhtml_branch_coverage=1 00:01:49.005 --rc genhtml_function_coverage=1 00:01:49.005 --rc genhtml_legend=1 00:01:49.005 --rc geninfo_all_blocks=1 00:01:49.005 --rc geninfo_unexecuted_blocks=1 00:01:49.005 00:01:49.005 ' 00:01:49.005 13:45:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:49.005 13:45:28 -- nvmf/common.sh@7 -- # uname -s 00:01:49.005 13:45:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:49.005 13:45:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:49.005 13:45:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:49.005 13:45:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:49.005 13:45:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:49.005 13:45:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:49.005 13:45:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:49.005 13:45:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:49.005 13:45:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:49.005 13:45:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:49.005 13:45:28 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:01:49.005 13:45:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:01:49.005 13:45:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:49.005 13:45:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:49.005 13:45:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:49.005 13:45:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:49.005 13:45:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.005 13:45:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:01:49.005 13:45:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:49.005 13:45:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.005 13:45:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.005 13:45:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.005 13:45:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.005 13:45:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.005 13:45:28 -- paths/export.sh@5 -- # export PATH 00:01:49.005 13:45:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.005 13:45:28 -- nvmf/common.sh@51 -- # : 0 00:01:49.005 13:45:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:01:49.005 13:45:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:01:49.005 13:45:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:49.005 13:45:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:49.005 13:45:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:49.005 13:45:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:01:49.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:01:49.005 13:45:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:01:49.005 13:45:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:01:49.005 13:45:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:01:49.005 13:45:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:49.005 13:45:28 -- spdk/autotest.sh@32 -- # uname -s 00:01:49.005 13:45:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:49.005 13:45:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:49.005 13:45:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
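
The trace above also records a genuine shell failure from nvmf/common.sh line 33: the call '[' '' -eq 1 ']' aborts with "integer expression expected" because test's -eq operator needs integer operands and the variable expands to an empty string. A minimal sketch of the failure and a defensive rewrite, using a hypothetical variable name:

flag=""                                               # empty, as in the logged call
[ "$flag" -eq 1 ] 2>/dev/null || echo "integer expression expected"
# defensive form: default the empty value to 0 before the numeric comparison
[ "${flag:-0}" -eq 1 ] && echo "enabled" || echo "disabled"
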
00:01:49.005 13:45:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:49.005 13:45:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.005 13:45:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:49.005 13:45:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:49.005 13:45:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:49.005 13:45:28 -- spdk/autotest.sh@48 -- # udevadm_pid=581954 00:01:49.005 13:45:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:49.005 13:45:28 -- pm/common@17 -- # local monitor 00:01:49.005 13:45:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.005 13:45:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.005 13:45:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:49.005 13:45:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.005 13:45:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.005 13:45:28 -- pm/common@25 -- # sleep 1 00:01:49.005 13:45:28 -- pm/common@21 -- # date +%s 00:01:49.005 13:45:28 -- pm/common@21 -- # date +%s 00:01:49.005 13:45:28 -- pm/common@21 -- # date +%s 00:01:49.005 13:45:28 -- pm/common@21 -- # date +%s 00:01:49.005 13:45:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897128 00:01:49.005 13:45:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897128 00:01:49.005 13:45:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897128 00:01:49.005 13:45:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730897128 00:01:49.264 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897128_collect-cpu-temp.pm.log 00:01:49.264 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897128_collect-vmstat.pm.log 00:01:49.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897128_collect-cpu-load.pm.log 00:01:49.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730897128_collect-bmc-pm.bmc.pm.log 00:01:50.201 13:45:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:50.201 13:45:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:50.201 13:45:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:01:50.201 13:45:29 -- common/autotest_common.sh@10 -- # set +x 00:01:50.201 13:45:29 -- spdk/autotest.sh@59 -- # create_test_list 00:01:50.201 13:45:29 -- common/autotest_common.sh@750 -- # xtrace_disable 00:01:50.201 13:45:29 -- common/autotest_common.sh@10 -- # set +x 00:01:50.201 13:45:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:50.201 13:45:29 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.201 13:45:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.201 13:45:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:50.201 13:45:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.201 13:45:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:50.201 13:45:29 -- common/autotest_common.sh@1455 -- # uname 00:01:50.201 13:45:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:50.201 13:45:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:50.201 13:45:29 -- common/autotest_common.sh@1475 -- # uname 00:01:50.201 13:45:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:50.201 13:45:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:01:50.201 13:45:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:01:50.201 lcov: LCOV version 1.15 00:01:50.201 13:45:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:05.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:05.075 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:10.346 13:45:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:10.346 13:45:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:10.346 13:45:48 -- common/autotest_common.sh@10 -- # set +x 00:02:10.346 13:45:48 -- spdk/autotest.sh@78 -- # rm -f 00:02:10.346 13:45:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:11.724 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:11.724 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:11.724 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:11.724 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:11.724 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:11.724 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:11.985 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:11.985 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:11.985 13:45:51 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:11.985 13:45:51 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:11.985 13:45:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:11.985 13:45:51 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:11.985 13:45:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:11.985 13:45:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:11.985 13:45:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:11.985 13:45:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:11.985 13:45:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:11.985 13:45:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:11.985 13:45:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:11.985 13:45:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:11.985 13:45:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:11.985 13:45:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:11.985 13:45:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:11.985 No valid GPT data, bailing 00:02:11.985 13:45:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:11.985 13:45:51 -- scripts/common.sh@394 -- # pt= 00:02:11.985 13:45:51 -- scripts/common.sh@395 -- # return 1 00:02:11.985 13:45:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:11.985 1+0 records in 00:02:11.985 1+0 records out 00:02:11.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00175238 s, 598 MB/s 00:02:11.985 13:45:51 -- spdk/autotest.sh@105 -- # sync 00:02:11.985 13:45:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:11.985 13:45:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:11.985 13:45:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:17.261 13:45:56 -- spdk/autotest.sh@111 -- # uname -s 00:02:17.261 13:45:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:17.261 13:45:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:17.261 13:45:56 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:19.797 Hugepages 00:02:19.797 node hugesize free / total 00:02:19.797 node0 1048576kB 0 / 0 00:02:19.797 node0 2048kB 0 / 0 00:02:19.797 node1 1048576kB 0 / 0 00:02:19.797 node1 2048kB 0 / 0 00:02:19.797 00:02:19.797 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:19.797 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:19.797 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:19.797 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:19.797 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:19.797 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:02:19.797 13:45:58 -- spdk/autotest.sh@117 -- # uname -s 00:02:19.797 13:45:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:19.797 13:45:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:19.797 13:45:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:22.334 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:22.334 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:23.716 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:23.975 13:46:03 -- common/autotest_common.sh@1515 -- # sleep 1 00:02:24.915 13:46:04 -- common/autotest_common.sh@1516 -- # bdfs=() 00:02:24.915 13:46:04 -- common/autotest_common.sh@1516 -- # local bdfs 00:02:24.915 13:46:04 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:02:24.915 13:46:04 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:02:24.915 13:46:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:02:24.915 13:46:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:02:24.915 13:46:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:24.915 13:46:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:24.915 13:46:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:02:24.915 13:46:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:02:24.915 13:46:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:02:24.915 13:46:04 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.453 Waiting for block devices as requested 00:02:27.453 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:02:27.453 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:02:27.453 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:02:27.453 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:02:27.712 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:02:27.712 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:02:27.712 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:02:27.712 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:02:27.972 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:02:27.972 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:02:27.972 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:02:28.232 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:02:28.232 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:02:28.232 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:02:28.232 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:02:28.491 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:02:28.491 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:02:28.491 13:46:07 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:02:28.491 13:46:07 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:02:28.492 13:46:07 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:02:28.492 13:46:07 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:02:28.492 13:46:07 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1529 -- # grep oacs 00:02:28.492 13:46:07 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:02:28.492 13:46:07 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:02:28.492 13:46:07 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:02:28.492 13:46:07 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:02:28.492 13:46:07 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:02:28.492 13:46:07 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:02:28.492 13:46:07 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:02:28.492 13:46:07 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:02:28.492 13:46:07 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:02:28.492 13:46:07 -- common/autotest_common.sh@1541 -- # continue 00:02:28.492 13:46:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:02:28.492 13:46:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:28.492 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:02:28.492 13:46:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:02:28.492 13:46:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:28.492 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:02:28.492 13:46:07 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:31.031 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:31.031 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:31.031 13:46:10 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:02:31.031 13:46:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:31.031 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:02:31.291 13:46:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:02:31.291 13:46:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:02:31.291 13:46:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:02:31.291 13:46:10 -- common/autotest_common.sh@1561 -- # bdfs=() 00:02:31.291 13:46:10 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:02:31.291 13:46:10 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:02:31.291 13:46:10 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:02:31.291 13:46:10 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:02:31.291 13:46:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:02:31.291 13:46:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:02:31.291 13:46:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:31.291 13:46:10 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:31.291 13:46:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:02:31.291 13:46:10 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:02:31.291 13:46:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:02:31.291 13:46:10 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:02:31.291 13:46:10 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:02:31.291 13:46:10 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:02:31.291 13:46:10 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:02:31.291 13:46:10 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:02:31.291 13:46:10 -- common/autotest_common.sh@1570 -- # return 0 00:02:31.291 13:46:10 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:02:31.291 13:46:10 -- common/autotest_common.sh@1578 -- # return 0 00:02:31.291 13:46:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:02:31.291 13:46:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:02:31.291 13:46:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:31.291 13:46:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:31.291 13:46:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:02:31.291 13:46:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:31.291 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:02:31.291 13:46:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:02:31.291 13:46:10 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:31.291 13:46:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:31.291 13:46:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:31.291 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:02:31.291 ************************************ 00:02:31.291 START TEST env 00:02:31.291 ************************************ 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:31.291 * Looking for test storage... 
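
The START TEST banner above comes from SPDK's run_test helper in common/autotest_common.sh. A rough sketch of the visible banner pattern, assuming nothing about the helper's internals beyond what this log shows (the real function also records timing and xtrace state):

run_test_sketch() {
  local name=$1; shift
  printf '************************************\nSTART TEST %s\n************************************\n' "$name"
  "$@"; local rc=$?
  printf '************************************\nEND TEST %s\n************************************\n' "$name"
  return $rc
}
# usage mirroring the invocation traced above:
run_test_sketch env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
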
00:02:31.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1691 -- # lcov --version 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:31.291 13:46:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:31.291 13:46:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:31.291 13:46:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:31.291 13:46:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:02:31.291 13:46:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:02:31.291 13:46:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:02:31.291 13:46:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:02:31.291 13:46:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:02:31.291 13:46:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:02:31.291 13:46:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:02:31.291 13:46:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:31.291 13:46:10 env -- scripts/common.sh@344 -- # case "$op" in 00:02:31.291 13:46:10 env -- scripts/common.sh@345 -- # : 1 00:02:31.291 13:46:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:31.291 13:46:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:31.291 13:46:10 env -- scripts/common.sh@365 -- # decimal 1 00:02:31.291 13:46:10 env -- scripts/common.sh@353 -- # local d=1 00:02:31.291 13:46:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:31.291 13:46:10 env -- scripts/common.sh@355 -- # echo 1 00:02:31.291 13:46:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:02:31.291 13:46:10 env -- scripts/common.sh@366 -- # decimal 2 00:02:31.291 13:46:10 env -- scripts/common.sh@353 -- # local d=2 00:02:31.291 13:46:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:31.291 13:46:10 env -- scripts/common.sh@355 -- # echo 2 00:02:31.291 13:46:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:02:31.291 13:46:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:31.291 13:46:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:31.291 13:46:10 env -- scripts/common.sh@368 -- # return 0 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.291 --rc genhtml_branch_coverage=1 00:02:31.291 --rc genhtml_function_coverage=1 00:02:31.291 --rc genhtml_legend=1 00:02:31.291 --rc geninfo_all_blocks=1 00:02:31.291 --rc geninfo_unexecuted_blocks=1 00:02:31.291 00:02:31.291 ' 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.291 --rc genhtml_branch_coverage=1 00:02:31.291 --rc genhtml_function_coverage=1 00:02:31.291 --rc genhtml_legend=1 00:02:31.291 --rc geninfo_all_blocks=1 00:02:31.291 --rc geninfo_unexecuted_blocks=1 00:02:31.291 00:02:31.291 ' 00:02:31.291 13:46:10 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.291 --rc genhtml_branch_coverage=1 00:02:31.291 --rc genhtml_function_coverage=1 
00:02:31.291 --rc genhtml_legend=1 00:02:31.292 --rc geninfo_all_blocks=1 00:02:31.292 --rc geninfo_unexecuted_blocks=1 00:02:31.292 00:02:31.292 ' 00:02:31.292 13:46:10 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:31.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.292 --rc genhtml_branch_coverage=1 00:02:31.292 --rc genhtml_function_coverage=1 00:02:31.292 --rc genhtml_legend=1 00:02:31.292 --rc geninfo_all_blocks=1 00:02:31.292 --rc geninfo_unexecuted_blocks=1 00:02:31.292 00:02:31.292 ' 00:02:31.292 13:46:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:31.292 13:46:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:31.292 13:46:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:31.292 13:46:10 env -- common/autotest_common.sh@10 -- # set +x 00:02:31.552 ************************************ 00:02:31.552 START TEST env_memory 00:02:31.552 ************************************ 00:02:31.552 13:46:10 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:31.552 00:02:31.552 00:02:31.552 CUnit - A unit testing framework for C - Version 2.1-3 00:02:31.552 http://cunit.sourceforge.net/ 00:02:31.552 00:02:31.552 00:02:31.552 Suite: memory 00:02:31.552 Test: alloc and free memory map ...[2024-11-06 13:46:10.610782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:02:31.552 passed 00:02:31.552 Test: mem map translation ...[2024-11-06 13:46:10.636381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:02:31.552 [2024-11-06 13:46:10.636414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:02:31.552 [2024-11-06 13:46:10.636462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:02:31.552 [2024-11-06 13:46:10.636470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:02:31.552 passed 00:02:31.552 Test: mem map registration ...[2024-11-06 13:46:10.691852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:02:31.552 [2024-11-06 13:46:10.691886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:02:31.552 passed 00:02:31.552 Test: mem map adjacent registrations ...passed 00:02:31.552 00:02:31.552 Run Summary: Type Total Ran Passed Failed Inactive 00:02:31.552 suites 1 1 n/a 0 0 00:02:31.552 tests 4 4 4 0 0 00:02:31.552 asserts 152 152 152 0 n/a 00:02:31.552 00:02:31.552 Elapsed time = 0.182 seconds 00:02:31.552 00:02:31.552 real 0m0.191s 00:02:31.552 user 0m0.183s 00:02:31.552 sys 0m0.007s 00:02:31.552 13:46:10 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:31.552 13:46:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:02:31.552 ************************************ 00:02:31.552 END TEST env_memory 00:02:31.552 ************************************ 00:02:31.552 13:46:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:31.552 13:46:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:31.552 13:46:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:31.552 13:46:10 env -- common/autotest_common.sh@10 -- # set +x 00:02:31.552 ************************************ 00:02:31.552 START TEST env_vtophys 00:02:31.552 ************************************ 00:02:31.553 13:46:10 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:31.553 EAL: lib.eal log level changed from notice to debug 00:02:31.553 EAL: Detected lcore 0 as core 0 on socket 0 00:02:31.553 EAL: Detected lcore 1 as core 1 on socket 0 00:02:31.553 EAL: Detected lcore 2 as core 2 on socket 0 00:02:31.553 EAL: Detected lcore 3 as core 3 on socket 0 00:02:31.553 EAL: Detected lcore 4 as core 4 on socket 0 00:02:31.553 EAL: Detected lcore 5 as core 5 on socket 0 00:02:31.553 EAL: Detected lcore 6 as core 6 on socket 0 00:02:31.553 EAL: Detected lcore 7 as core 7 on socket 0 00:02:31.553 EAL: Detected lcore 8 as core 8 on socket 0 00:02:31.553 EAL: Detected lcore 9 as core 9 on socket 0 00:02:31.553 EAL: Detected lcore 10 as core 10 on socket 0 00:02:31.553 EAL: Detected lcore 11 as core 11 on socket 0 00:02:31.553 EAL: Detected lcore 12 as core 12 on socket 0 00:02:31.553 EAL: Detected lcore 13 as core 13 on socket 0 00:02:31.553 EAL: Detected lcore 14 as core 14 on socket 0 00:02:31.553 EAL: Detected lcore 15 as core 15 on socket 0 00:02:31.553 EAL: Detected lcore 16 as core 16 on socket 0 00:02:31.553 EAL: Detected lcore 17 as core 17 on socket 0 00:02:31.553 EAL: Detected lcore 18 as core 18 on socket 0 00:02:31.553 EAL: Detected lcore 19 as core 19 on socket 0 00:02:31.553 EAL: Detected lcore 20 as core 20 on socket 0 00:02:31.553 EAL: Detected lcore 21 as core 21 on socket 0 00:02:31.553 EAL: Detected lcore 22 as core 22 on socket 0 00:02:31.553 EAL: Detected lcore 23 as core 23 on socket 0 00:02:31.553 EAL: Detected lcore 24 as core 24 on socket 0 00:02:31.553 EAL: Detected lcore 25 as core 25 on socket 0 00:02:31.553 EAL: Detected lcore 26 as core 26 on socket 0 00:02:31.553 EAL: Detected lcore 27 as core 27 on socket 0 00:02:31.553 EAL: Detected lcore 28 as core 28 on socket 0 00:02:31.553 EAL: Detected lcore 29 as core 29 on socket 0 00:02:31.553 EAL: Detected lcore 30 as core 30 on socket 0 00:02:31.553 EAL: Detected lcore 31 as core 31 on socket 0 00:02:31.553 EAL: Detected lcore 32 as core 32 on socket 0 00:02:31.553 EAL: Detected lcore 33 as core 33 on socket 0 00:02:31.553 EAL: Detected lcore 34 as core 34 on socket 0 00:02:31.553 EAL: Detected lcore 35 as core 35 on socket 0 00:02:31.553 EAL: Detected lcore 36 as core 0 on socket 1 00:02:31.553 EAL: Detected lcore 37 as core 1 on socket 1 00:02:31.553 EAL: Detected lcore 38 as core 2 on socket 1 00:02:31.553 EAL: Detected lcore 39 as core 3 on socket 1 00:02:31.553 EAL: Detected lcore 40 as core 4 on socket 1 00:02:31.553 EAL: Detected lcore 41 as core 5 on socket 1 00:02:31.553 EAL: Detected lcore 42 as core 6 on socket 1 00:02:31.553 EAL: Detected lcore 43 as core 7 on socket 1 00:02:31.553 EAL: Detected lcore 44 as core 8 on socket 1 00:02:31.553 EAL: Detected lcore 45 as core 9 on socket 1 
00:02:31.553 EAL: Detected lcore 46 as core 10 on socket 1 00:02:31.553 EAL: Detected lcore 47 as core 11 on socket 1 00:02:31.553 EAL: Detected lcore 48 as core 12 on socket 1 00:02:31.553 EAL: Detected lcore 49 as core 13 on socket 1 00:02:31.553 EAL: Detected lcore 50 as core 14 on socket 1 00:02:31.553 EAL: Detected lcore 51 as core 15 on socket 1 00:02:31.553 EAL: Detected lcore 52 as core 16 on socket 1 00:02:31.553 EAL: Detected lcore 53 as core 17 on socket 1 00:02:31.553 EAL: Detected lcore 54 as core 18 on socket 1 00:02:31.553 EAL: Detected lcore 55 as core 19 on socket 1 00:02:31.553 EAL: Detected lcore 56 as core 20 on socket 1 00:02:31.553 EAL: Detected lcore 57 as core 21 on socket 1 00:02:31.553 EAL: Detected lcore 58 as core 22 on socket 1 00:02:31.553 EAL: Detected lcore 59 as core 23 on socket 1 00:02:31.553 EAL: Detected lcore 60 as core 24 on socket 1 00:02:31.553 EAL: Detected lcore 61 as core 25 on socket 1 00:02:31.553 EAL: Detected lcore 62 as core 26 on socket 1 00:02:31.553 EAL: Detected lcore 63 as core 27 on socket 1 00:02:31.553 EAL: Detected lcore 64 as core 28 on socket 1 00:02:31.553 EAL: Detected lcore 65 as core 29 on socket 1 00:02:31.553 EAL: Detected lcore 66 as core 30 on socket 1 00:02:31.553 EAL: Detected lcore 67 as core 31 on socket 1 00:02:31.553 EAL: Detected lcore 68 as core 32 on socket 1 00:02:31.553 EAL: Detected lcore 69 as core 33 on socket 1 00:02:31.553 EAL: Detected lcore 70 as core 34 on socket 1 00:02:31.553 EAL: Detected lcore 71 as core 35 on socket 1 00:02:31.553 EAL: Detected lcore 72 as core 0 on socket 0 00:02:31.553 EAL: Detected lcore 73 as core 1 on socket 0 00:02:31.553 EAL: Detected lcore 74 as core 2 on socket 0 00:02:31.553 EAL: Detected lcore 75 as core 3 on socket 0 00:02:31.553 EAL: Detected lcore 76 as core 4 on socket 0 00:02:31.553 EAL: Detected lcore 77 as core 5 on socket 0 00:02:31.553 EAL: Detected lcore 78 as core 6 on socket 0 00:02:31.553 EAL: Detected lcore 79 as core 7 on socket 0 00:02:31.553 EAL: Detected lcore 80 as core 8 on socket 0 00:02:31.553 EAL: Detected lcore 81 as core 9 on socket 0 00:02:31.553 EAL: Detected lcore 82 as core 10 on socket 0 00:02:31.553 EAL: Detected lcore 83 as core 11 on socket 0 00:02:31.553 EAL: Detected lcore 84 as core 12 on socket 0 00:02:31.553 EAL: Detected lcore 85 as core 13 on socket 0 00:02:31.553 EAL: Detected lcore 86 as core 14 on socket 0 00:02:31.553 EAL: Detected lcore 87 as core 15 on socket 0 00:02:31.553 EAL: Detected lcore 88 as core 16 on socket 0 00:02:31.553 EAL: Detected lcore 89 as core 17 on socket 0 00:02:31.553 EAL: Detected lcore 90 as core 18 on socket 0 00:02:31.553 EAL: Detected lcore 91 as core 19 on socket 0 00:02:31.553 EAL: Detected lcore 92 as core 20 on socket 0 00:02:31.553 EAL: Detected lcore 93 as core 21 on socket 0 00:02:31.553 EAL: Detected lcore 94 as core 22 on socket 0 00:02:31.553 EAL: Detected lcore 95 as core 23 on socket 0 00:02:31.553 EAL: Detected lcore 96 as core 24 on socket 0 00:02:31.553 EAL: Detected lcore 97 as core 25 on socket 0 00:02:31.553 EAL: Detected lcore 98 as core 26 on socket 0 00:02:31.553 EAL: Detected lcore 99 as core 27 on socket 0 00:02:31.553 EAL: Detected lcore 100 as core 28 on socket 0 00:02:31.553 EAL: Detected lcore 101 as core 29 on socket 0 00:02:31.553 EAL: Detected lcore 102 as core 30 on socket 0 00:02:31.553 EAL: Detected lcore 103 as core 31 on socket 0 00:02:31.553 EAL: Detected lcore 104 as core 32 on socket 0 00:02:31.553 EAL: Detected lcore 105 as core 33 on socket 0 00:02:31.553 EAL: 
Detected lcore 106 as core 34 on socket 0 00:02:31.553 EAL: Detected lcore 107 as core 35 on socket 0 00:02:31.553 EAL: Detected lcore 108 as core 0 on socket 1 00:02:31.553 EAL: Detected lcore 109 as core 1 on socket 1 00:02:31.553 EAL: Detected lcore 110 as core 2 on socket 1 00:02:31.553 EAL: Detected lcore 111 as core 3 on socket 1 00:02:31.553 EAL: Detected lcore 112 as core 4 on socket 1 00:02:31.553 EAL: Detected lcore 113 as core 5 on socket 1 00:02:31.553 EAL: Detected lcore 114 as core 6 on socket 1 00:02:31.553 EAL: Detected lcore 115 as core 7 on socket 1 00:02:31.553 EAL: Detected lcore 116 as core 8 on socket 1 00:02:31.553 EAL: Detected lcore 117 as core 9 on socket 1 00:02:31.553 EAL: Detected lcore 118 as core 10 on socket 1 00:02:31.553 EAL: Detected lcore 119 as core 11 on socket 1 00:02:31.553 EAL: Detected lcore 120 as core 12 on socket 1 00:02:31.553 EAL: Detected lcore 121 as core 13 on socket 1 00:02:31.553 EAL: Detected lcore 122 as core 14 on socket 1 00:02:31.553 EAL: Detected lcore 123 as core 15 on socket 1 00:02:31.553 EAL: Detected lcore 124 as core 16 on socket 1 00:02:31.553 EAL: Detected lcore 125 as core 17 on socket 1 00:02:31.553 EAL: Detected lcore 126 as core 18 on socket 1 00:02:31.553 EAL: Detected lcore 127 as core 19 on socket 1 00:02:31.553 EAL: Skipped lcore 128 as core 20 on socket 1 00:02:31.553 EAL: Skipped lcore 129 as core 21 on socket 1 00:02:31.553 EAL: Skipped lcore 130 as core 22 on socket 1 00:02:31.553 EAL: Skipped lcore 131 as core 23 on socket 1 00:02:31.553 EAL: Skipped lcore 132 as core 24 on socket 1 00:02:31.553 EAL: Skipped lcore 133 as core 25 on socket 1 00:02:31.553 EAL: Skipped lcore 134 as core 26 on socket 1 00:02:31.553 EAL: Skipped lcore 135 as core 27 on socket 1 00:02:31.553 EAL: Skipped lcore 136 as core 28 on socket 1 00:02:31.553 EAL: Skipped lcore 137 as core 29 on socket 1 00:02:31.553 EAL: Skipped lcore 138 as core 30 on socket 1 00:02:31.553 EAL: Skipped lcore 139 as core 31 on socket 1 00:02:31.553 EAL: Skipped lcore 140 as core 32 on socket 1 00:02:31.553 EAL: Skipped lcore 141 as core 33 on socket 1 00:02:31.553 EAL: Skipped lcore 142 as core 34 on socket 1 00:02:31.553 EAL: Skipped lcore 143 as core 35 on socket 1 00:02:31.553 EAL: Maximum logical cores by configuration: 128 00:02:31.553 EAL: Detected CPU lcores: 128 00:02:31.553 EAL: Detected NUMA nodes: 2 00:02:31.553 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:02:31.553 EAL: Detected shared linkage of DPDK 00:02:31.814 EAL: No shared files mode enabled, IPC will be disabled 00:02:31.814 EAL: Bus pci wants IOVA as 'DC' 00:02:31.814 EAL: Buses did not request a specific IOVA mode. 00:02:31.814 EAL: IOMMU is available, selecting IOVA as VA mode. 00:02:31.814 EAL: Selected IOVA mode 'VA' 00:02:31.814 EAL: Probing VFIO support... 00:02:31.814 EAL: IOMMU type 1 (Type 1) is supported 00:02:31.814 EAL: IOMMU type 7 (sPAPR) is not supported 00:02:31.814 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:02:31.814 EAL: VFIO support initialized 00:02:31.814 EAL: Ask a virtual area of 0x2e000 bytes 00:02:31.814 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:02:31.814 EAL: Setting up physically contiguous memory... 
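
Before the memory setup that follows, EAL probed VFIO above and settled on IOMMU type 1 with IOVA as VA. A hedged equivalent of that probe from the shell, using standard Linux sysfs paths rather than anything SPDK-specific:

# any populated IOMMU group means the kernel can translate IOVAs for devices
if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
  echo "IOMMU active: IOVA-as-VA mode is available"
fi
# vfio-pci must be loaded for a userspace driver to claim devices
[ -d /sys/bus/pci/drivers/vfio-pci ] && echo "vfio-pci driver is loaded"
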
00:02:31.814 EAL: Setting maximum number of open files to 524288 00:02:31.814 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:02:31.814 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:02:31.814 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:02:31.814 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:02:31.814 EAL: Ask a virtual area of 0x61000 bytes 00:02:31.814 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:02:31.814 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:31.814 EAL: Ask a virtual area of 0x400000000 bytes 00:02:31.814 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:02:31.814 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:02:31.814 EAL: Hugepages will be freed exactly as allocated. 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: TSC frequency is ~2400000 KHz 00:02:31.814 EAL: Main lcore 0 is ready (tid=7f3d610e8a00;cpuset=[0]) 00:02:31.814 EAL: Trying to obtain current memory policy. 00:02:31.814 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.814 EAL: Restoring previous memory policy: 0 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was expanded by 2MB 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: No PCI address specified using 'addr=' in: bus=pci 00:02:31.814 EAL: Mem event callback 'spdk:(nil)' registered 00:02:31.814 00:02:31.814 00:02:31.814 CUnit - A unit testing framework for C - Version 2.1-3 00:02:31.814 http://cunit.sourceforge.net/ 00:02:31.814 00:02:31.814 00:02:31.814 Suite: components_suite 00:02:31.814 Test: vtophys_malloc_test ...passed 00:02:31.814 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:02:31.814 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.814 EAL: Restoring previous memory policy: 4 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was expanded by 4MB 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was shrunk by 4MB 00:02:31.814 EAL: Trying to obtain current memory policy. 00:02:31.814 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.814 EAL: Restoring previous memory policy: 4 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was expanded by 6MB 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was shrunk by 6MB 00:02:31.814 EAL: Trying to obtain current memory policy. 00:02:31.814 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.814 EAL: Restoring previous memory policy: 4 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was expanded by 10MB 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.814 EAL: No shared files mode enabled, IPC is disabled 00:02:31.814 EAL: Heap on socket 0 was shrunk by 10MB 00:02:31.814 EAL: Trying to obtain current memory policy. 
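
Each "Heap on socket 0 was expanded/shrunk by N MB" pair below is a mem-event callback firing as DPDK maps and unmaps 2 MB hugepages on demand. A sketch for watching the kernel-side counters move while such a test runs (2048 kB pages assumed, matching the hugepage size reported in this log):

# free hugepages drop on "expanded" and recover on "shrunk"
grep -E 'HugePages_(Total|Free)' /proc/meminfo
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
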
00:02:31.814 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.814 EAL: Restoring previous memory policy: 4 00:02:31.814 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.814 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was expanded by 18MB 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was shrunk by 18MB 00:02:31.815 EAL: Trying to obtain current memory policy. 00:02:31.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.815 EAL: Restoring previous memory policy: 4 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was expanded by 34MB 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was shrunk by 34MB 00:02:31.815 EAL: Trying to obtain current memory policy. 00:02:31.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.815 EAL: Restoring previous memory policy: 4 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was expanded by 66MB 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was shrunk by 66MB 00:02:31.815 EAL: Trying to obtain current memory policy. 00:02:31.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.815 EAL: Restoring previous memory policy: 4 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was expanded by 130MB 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was shrunk by 130MB 00:02:31.815 EAL: Trying to obtain current memory policy. 00:02:31.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:31.815 EAL: Restoring previous memory policy: 4 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:31.815 EAL: request: mp_malloc_sync 00:02:31.815 EAL: No shared files mode enabled, IPC is disabled 00:02:31.815 EAL: Heap on socket 0 was expanded by 258MB 00:02:31.815 EAL: Calling mem event callback 'spdk:(nil)' 00:02:32.122 EAL: request: mp_malloc_sync 00:02:32.122 EAL: No shared files mode enabled, IPC is disabled 00:02:32.122 EAL: Heap on socket 0 was shrunk by 258MB 00:02:32.122 EAL: Trying to obtain current memory policy. 
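
The allocation sizes stepped through above and below (4, 6, 10, 18, 34, 66, 130, 258, then 514 and 1026 MB) follow 2^n + 2 MB, so each round roughly doubles the heap growth the allocator has to satisfy. The series is easy to reproduce:

for n in $(seq 1 10); do printf '%d MB\n' $(( (1 << n) + 2 )); done
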
00:02:32.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:32.122 EAL: Restoring previous memory policy: 4 00:02:32.122 EAL: Calling mem event callback 'spdk:(nil)' 00:02:32.122 EAL: request: mp_malloc_sync 00:02:32.122 EAL: No shared files mode enabled, IPC is disabled 00:02:32.122 EAL: Heap on socket 0 was expanded by 514MB 00:02:32.122 EAL: Calling mem event callback 'spdk:(nil)' 00:02:32.122 EAL: request: mp_malloc_sync 00:02:32.122 EAL: No shared files mode enabled, IPC is disabled 00:02:32.122 EAL: Heap on socket 0 was shrunk by 514MB 00:02:32.122 EAL: Trying to obtain current memory policy. 00:02:32.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:32.382 EAL: Restoring previous memory policy: 4 00:02:32.382 EAL: Calling mem event callback 'spdk:(nil)' 00:02:32.382 EAL: request: mp_malloc_sync 00:02:32.382 EAL: No shared files mode enabled, IPC is disabled 00:02:32.382 EAL: Heap on socket 0 was expanded by 1026MB 00:02:32.382 EAL: Calling mem event callback 'spdk:(nil)' 00:02:32.382 EAL: request: mp_malloc_sync 00:02:32.382 EAL: No shared files mode enabled, IPC is disabled 00:02:32.382 EAL: Heap on socket 0 was shrunk by 1026MB 00:02:32.382 passed 00:02:32.382 00:02:32.382 Run Summary: Type Total Ran Passed Failed Inactive 00:02:32.382 suites 1 1 n/a 0 0 00:02:32.382 tests 2 2 2 0 0 00:02:32.382 asserts 497 497 497 0 n/a 00:02:32.382 00:02:32.382 Elapsed time = 0.689 seconds 00:02:32.382 EAL: Calling mem event callback 'spdk:(nil)' 00:02:32.382 EAL: request: mp_malloc_sync 00:02:32.382 EAL: No shared files mode enabled, IPC is disabled 00:02:32.382 EAL: Heap on socket 0 was shrunk by 2MB 00:02:32.382 EAL: No shared files mode enabled, IPC is disabled 00:02:32.382 EAL: No shared files mode enabled, IPC is disabled 00:02:32.382 EAL: No shared files mode enabled, IPC is disabled 00:02:32.382 00:02:32.382 real 0m0.817s 00:02:32.382 user 0m0.429s 00:02:32.382 sys 0m0.360s 00:02:32.382 13:46:11 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:32.382 13:46:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:02:32.382 ************************************ 00:02:32.382 END TEST env_vtophys 00:02:32.382 ************************************ 00:02:32.382 13:46:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:32.382 13:46:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:32.382 13:46:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:32.382 13:46:11 env -- common/autotest_common.sh@10 -- # set +x 00:02:32.642 ************************************ 00:02:32.642 START TEST env_pci 00:02:32.642 ************************************ 00:02:32.642 13:46:11 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:32.642 00:02:32.642 00:02:32.642 CUnit - A unit testing framework for C - Version 2.1-3 00:02:32.642 http://cunit.sourceforge.net/ 00:02:32.642 00:02:32.642 00:02:32.642 Suite: pci 00:02:32.642 Test: pci_hook ...[2024-11-06 13:46:11.692166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 598667 has claimed it 00:02:32.642 EAL: Cannot find device (10000:00:01.0) 00:02:32.642 EAL: Failed to attach device on primary process 00:02:32.642 passed 00:02:32.642 00:02:32.642 Run Summary: Type Total Ran Passed Failed Inactive 
00:02:32.642 Run Summary: Type Total Ran Passed Failed Inactive
00:02:32.642 suites 1 1 n/a 0 0
00:02:32.642 tests 1 1 1 0 0
00:02:32.642 asserts 25 25 25 0 n/a
00:02:32.642
00:02:32.642 Elapsed time = 0.024 seconds
00:02:32.642
00:02:32.642 real 0m0.035s
00:02:32.642 user 0m0.007s
00:02:32.642 sys 0m0.027s
00:02:32.642 13:46:11 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:02:32.642 13:46:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:02:32.642 ************************************
00:02:32.642 END TEST env_pci
00:02:32.642 ************************************
00:02:32.642 13:46:11 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:02:32.642 13:46:11 env -- env/env.sh@15 -- # uname
00:02:32.642 13:46:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:02:32.642 13:46:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:02:32.642 13:46:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:02:32.642 13:46:11 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:02:32.642 13:46:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:02:32.642 13:46:11 env -- common/autotest_common.sh@10 -- # set +x
00:02:32.642 ************************************
00:02:32.642 START TEST env_dpdk_post_init
00:02:32.642 ************************************
00:02:32.642 13:46:11 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:02:32.642 EAL: Detected CPU lcores: 128
00:02:32.642 EAL: Detected NUMA nodes: 2
00:02:32.642 EAL: Detected shared linkage of DPDK
00:02:32.642 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:02:32.642 EAL: Selected IOVA mode 'VA'
00:02:32.642 EAL: VFIO support initialized
00:02:32.642 TELEMETRY: No legacy callbacks, legacy socket not created
00:02:32.642 EAL: Using IOMMU type 1 (Type 1)
00:02:32.902 EAL: Ignore mapping IO port bar(1)
00:02:32.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:02:33.162 EAL: Ignore mapping IO port bar(1)
00:02:33.162 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:02:33.421 EAL: Ignore mapping IO port bar(1)
00:02:33.421 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:02:33.421 EAL: Ignore mapping IO port bar(1)
00:02:33.682 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:02:33.682 EAL: Ignore mapping IO port bar(1)
00:02:34.028 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:02:34.028 EAL: Ignore mapping IO port bar(1)
00:02:34.028 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:02:34.306 EAL: Ignore mapping IO port bar(1)
00:02:34.306 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:02:34.306 EAL: Ignore mapping IO port bar(1)
00:02:34.567 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:02:34.828 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:02:34.828 EAL: Ignore mapping IO port bar(1)
00:02:35.087 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:02:35.087 EAL: Ignore mapping IO port bar(1)
00:02:35.087 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:02:35.347 EAL: Ignore mapping IO port bar(1)
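These probe lines show the env_dpdk_post_init binary (started with -c 0x1 --base-virtaddr=0x200000000000) walking the PCI bus and binding the spdk_nvme driver to 0000:65:00.0. A minimal hedged sketch of the same enumeration through the public NVMe API; the app name is illustrative, and a real consumer would create I/O qpairs in attach_cb instead of detaching:

```c
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* returning true asks the driver to attach */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	spdk_nvme_detach(ctrlr); /* a real app would use the controller first */
}

int
main(void)
{
	struct spdk_env_opts opts;

	opts.opts_size = sizeof(opts); /* assumed requirement of recent SPDK */
	spdk_env_opts_init(&opts);
	opts.name = "probe_demo"; /* illustrative name, not from the test */
	opts.core_mask = "0x1";   /* mirrors the test's -c 0x1 */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* Enumerate local PCIe NVMe controllers, as in the probe lines above. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}
```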
00:02:35.347 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:02:35.607 EAL: Ignore mapping IO port bar(1)
00:02:35.607 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:02:35.868 EAL: Ignore mapping IO port bar(1)
00:02:35.868 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:02:36.127 EAL: Ignore mapping IO port bar(1)
00:02:36.127 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:02:36.127 EAL: Ignore mapping IO port bar(1)
00:02:36.385 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:02:36.385 EAL: Ignore mapping IO port bar(1)
00:02:36.644 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:02:36.644 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:02:36.644 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:02:36.644 Starting DPDK initialization...
00:02:36.644 Starting SPDK post initialization...
00:02:36.644 SPDK NVMe probe
00:02:36.644 Attaching to 0000:65:00.0
00:02:36.644 Attached to 0000:65:00.0
00:02:36.644 Cleaning up...
00:02:38.553
00:02:38.553 real 0m5.729s
00:02:38.553 user 0m0.094s
00:02:38.553 sys 0m0.184s
00:02:38.553 13:46:17 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:02:38.553 13:46:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:02:38.553 ************************************
00:02:38.553 END TEST env_dpdk_post_init
00:02:38.553 ************************************
00:02:38.553 13:46:17 env -- env/env.sh@26 -- # uname
00:02:38.553 13:46:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:02:38.553 13:46:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:02:38.553 13:46:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:02:38.553 13:46:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:02:38.553 13:46:17 env -- common/autotest_common.sh@10 -- # set +x
00:02:38.553 ************************************
00:02:38.553 START TEST env_mem_callbacks
00:02:38.553 ************************************
00:02:38.553 13:46:17 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:02:38.553 EAL: Detected CPU lcores: 128
00:02:38.553 EAL: Detected NUMA nodes: 2
00:02:38.553 EAL: Detected shared linkage of DPDK
00:02:38.553 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:02:38.553 EAL: Selected IOVA mode 'VA'
00:02:38.553 EAL: VFIO support initialized
00:02:38.553 TELEMETRY: No legacy callbacks, legacy socket not created
00:02:38.553
00:02:38.553
00:02:38.553 CUnit - A unit testing framework for C - Version 2.1-3
00:02:38.553 http://cunit.sourceforge.net/
00:02:38.553
00:02:38.553
00:02:38.553 Suite: memory
00:02:38.553 Test: test ...
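The env_mem_callbacks run that follows registers and unregisters externally allocated mappings and checks that SPDK's notification hooks fire for each. A hedged sketch of the public API under test; the alignment dance reflects SPDK tracking registrations in 2MB units, and the sizes and names here are illustrative:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;
	size_t len = 2 * 1024 * 1024; /* illustrative 2MB region */
	void *raw, *va;

	opts.opts_size = sizeof(opts); /* assumed requirement of recent SPDK */
	spdk_env_opts_init(&opts);
	opts.name = "mem_cb_demo"; /* illustrative name, not from the test */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Map twice the size so we can round up to a 2MB boundary: SPDK
	 * tracks registrations in 2MB units and rejects unaligned ranges. */
	raw = mmap(NULL, 2 * len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		return 1;
	}
	va = (void *)(((uintptr_t)raw + len - 1) & ~((uintptr_t)len - 1));

	/* Fires the REGISTER notification seen as "register 0x... <len>". */
	if (spdk_mem_register(va, len) != 0) {
		fprintf(stderr, "spdk_mem_register failed\n");
		return 1;
	}
	/* DMA into [va, va + len) would be legal here. */
	if (spdk_mem_unregister(va, len) != 0) {
		fprintf(stderr, "spdk_mem_unregister failed\n");
		return 1;
	}
	munmap(raw, 2 * len);
	return 0;
}
```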
00:02:38.553 register 0x200000200000 2097152
00:02:38.553 malloc 3145728
00:02:38.553 register 0x200000400000 4194304
00:02:38.553 buf 0x200000500000 len 3145728 PASSED
00:02:38.553 malloc 64
00:02:38.553 buf 0x2000004fff40 len 64 PASSED
00:02:38.553 malloc 4194304
00:02:38.553 register 0x200000800000 6291456
00:02:38.553 buf 0x200000a00000 len 4194304 PASSED
00:02:38.553 free 0x200000500000 3145728
00:02:38.553 free 0x2000004fff40 64
00:02:38.553 unregister 0x200000400000 4194304 PASSED
00:02:38.553 free 0x200000a00000 4194304
00:02:38.553 unregister 0x200000800000 6291456 PASSED
00:02:38.553 malloc 8388608
00:02:38.553 register 0x200000400000 10485760
00:02:38.553 buf 0x200000600000 len 8388608 PASSED
00:02:38.553 free 0x200000600000 8388608
00:02:38.553 unregister 0x200000400000 10485760 PASSED
00:02:38.553 passed
00:02:38.553
00:02:38.553 Run Summary: Type Total Ran Passed Failed Inactive
00:02:38.553 suites 1 1 n/a 0 0
00:02:38.553 tests 1 1 1 0 0
00:02:38.553 asserts 15 15 15 0 n/a
00:02:38.553
00:02:38.553 Elapsed time = 0.008 seconds
00:02:38.553
00:02:38.553 real 0m0.053s
00:02:38.553 user 0m0.012s
00:02:38.553 sys 0m0.042s
13:46:17 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:02:38.553 13:46:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:02:38.553 ************************************
00:02:38.553 END TEST env_mem_callbacks
00:02:38.553 ************************************
00:02:38.553
00:02:38.553 real 0m7.209s
00:02:38.553 user 0m0.869s
00:02:38.553 sys 0m0.880s
00:02:38.553 13:46:17 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:02:38.553 13:46:17 env -- common/autotest_common.sh@10 -- # set +x
00:02:38.553 ************************************
00:02:38.553 END TEST env
00:02:38.553 ************************************
00:02:38.553 13:46:17 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:02:38.553 13:46:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:02:38.553 13:46:17 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:02:38.553 13:46:17 -- common/autotest_common.sh@10 -- # set +x
00:02:38.553 ************************************
00:02:38.553 START TEST rpc
00:02:38.553 ************************************
00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:02:38.553 * Looking for test storage...
00:02:38.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:38.553 13:46:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:38.553 13:46:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:02:38.553 13:46:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:02:38.553 13:46:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:02:38.553 13:46:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:38.553 13:46:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:02:38.553 13:46:17 rpc -- scripts/common.sh@345 -- # : 1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:38.553 13:46:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:38.553 13:46:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@353 -- # local d=1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:38.553 13:46:17 rpc -- scripts/common.sh@355 -- # echo 1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:02:38.553 13:46:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@353 -- # local d=2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:38.553 13:46:17 rpc -- scripts/common.sh@355 -- # echo 2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:02:38.553 13:46:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:38.553 13:46:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:38.553 13:46:17 rpc -- scripts/common.sh@368 -- # return 0 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:38.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.553 --rc genhtml_branch_coverage=1 00:02:38.553 --rc genhtml_function_coverage=1 00:02:38.553 --rc genhtml_legend=1 00:02:38.553 --rc geninfo_all_blocks=1 00:02:38.553 --rc geninfo_unexecuted_blocks=1 00:02:38.553 00:02:38.553 ' 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:38.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.553 --rc genhtml_branch_coverage=1 00:02:38.553 --rc genhtml_function_coverage=1 00:02:38.553 --rc genhtml_legend=1 00:02:38.553 --rc geninfo_all_blocks=1 00:02:38.553 --rc geninfo_unexecuted_blocks=1 00:02:38.553 00:02:38.553 ' 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:38.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.553 --rc genhtml_branch_coverage=1 00:02:38.553 --rc genhtml_function_coverage=1 
00:02:38.553 --rc genhtml_legend=1 00:02:38.553 --rc geninfo_all_blocks=1 00:02:38.553 --rc geninfo_unexecuted_blocks=1 00:02:38.553 00:02:38.553 ' 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:38.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.553 --rc genhtml_branch_coverage=1 00:02:38.553 --rc genhtml_function_coverage=1 00:02:38.553 --rc genhtml_legend=1 00:02:38.553 --rc geninfo_all_blocks=1 00:02:38.553 --rc geninfo_unexecuted_blocks=1 00:02:38.553 00:02:38.553 ' 00:02:38.553 13:46:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=600147 00:02:38.553 13:46:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:38.553 13:46:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 600147 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@833 -- # '[' -z 600147 ']' 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:38.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:02:38.553 13:46:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:38.553 13:46:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:02:38.813 [2024-11-06 13:46:17.858365] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:02:38.813 [2024-11-06 13:46:17.858435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600147 ] 00:02:38.813 [2024-11-06 13:46:17.945219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:38.813 [2024-11-06 13:46:17.997854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:02:38.813 [2024-11-06 13:46:17.997909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 600147' to capture a snapshot of events at runtime. 00:02:38.813 [2024-11-06 13:46:17.997918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:02:38.813 [2024-11-06 13:46:17.997926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:02:38.813 [2024-11-06 13:46:17.997932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid600147 for offline analysis/debug. 
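Every rpc_cmd in the traces from here on is a JSON-RPC call into the spdk_tgt that has just started listening on /var/tmp/spdk.sock. On the C side a method is exposed with SPDK_RPC_REGISTER; a minimal sketch modeled on the hello-world style example in the SPDK docs, with an illustrative method name:

```c
#include "spdk/json.h"
#include "spdk/jsonrpc.h"
#include "spdk/rpc.h"

/* Handler for a trivial method that takes no parameters. */
static void
rpc_hello_world(struct spdk_jsonrpc_request *request,
		const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "hello_world requires no parameters");
		return;
	}

	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string(w, "Hello world");
	spdk_jsonrpc_end_result(request, w);
}
/* Callable while the target runs, e.g. via scripts/rpc.py hello_world
 * (or rpc_cmd hello_world inside these tests). */
SPDK_RPC_REGISTER("hello_world", rpc_hello_world, SPDK_RPC_RUNTIME)
```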
00:02:38.813 [2024-11-06 13:46:17.998725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:39.383 13:46:18 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:02:39.383 13:46:18 rpc -- common/autotest_common.sh@866 -- # return 0 00:02:39.383 13:46:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:39.383 13:46:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:39.383 13:46:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:02:39.383 13:46:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:02:39.383 13:46:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:39.383 13:46:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:39.383 13:46:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 ************************************ 00:02:39.644 START TEST rpc_integrity 00:02:39.644 ************************************ 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:02:39.644 { 00:02:39.644 "name": "Malloc0", 00:02:39.644 "aliases": [ 00:02:39.644 "29da1d78-bbe1-480d-8a55-3a613af67caa" 00:02:39.644 ], 00:02:39.644 "product_name": "Malloc disk", 00:02:39.644 "block_size": 512, 00:02:39.644 "num_blocks": 16384, 00:02:39.644 "uuid": "29da1d78-bbe1-480d-8a55-3a613af67caa", 00:02:39.644 "assigned_rate_limits": { 00:02:39.644 "rw_ios_per_sec": 0, 00:02:39.644 "rw_mbytes_per_sec": 0, 00:02:39.644 "r_mbytes_per_sec": 0, 00:02:39.644 "w_mbytes_per_sec": 0 00:02:39.644 }, 
00:02:39.644 "claimed": false, 00:02:39.644 "zoned": false, 00:02:39.644 "supported_io_types": { 00:02:39.644 "read": true, 00:02:39.644 "write": true, 00:02:39.644 "unmap": true, 00:02:39.644 "flush": true, 00:02:39.644 "reset": true, 00:02:39.644 "nvme_admin": false, 00:02:39.644 "nvme_io": false, 00:02:39.644 "nvme_io_md": false, 00:02:39.644 "write_zeroes": true, 00:02:39.644 "zcopy": true, 00:02:39.644 "get_zone_info": false, 00:02:39.644 "zone_management": false, 00:02:39.644 "zone_append": false, 00:02:39.644 "compare": false, 00:02:39.644 "compare_and_write": false, 00:02:39.644 "abort": true, 00:02:39.644 "seek_hole": false, 00:02:39.644 "seek_data": false, 00:02:39.644 "copy": true, 00:02:39.644 "nvme_iov_md": false 00:02:39.644 }, 00:02:39.644 "memory_domains": [ 00:02:39.644 { 00:02:39.644 "dma_device_id": "system", 00:02:39.644 "dma_device_type": 1 00:02:39.644 }, 00:02:39.644 { 00:02:39.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:39.644 "dma_device_type": 2 00:02:39.644 } 00:02:39.644 ], 00:02:39.644 "driver_specific": {} 00:02:39.644 } 00:02:39.644 ]' 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 [2024-11-06 13:46:18.786702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:02:39.644 [2024-11-06 13:46:18.786750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:39.644 [2024-11-06 13:46:18.786765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d01580 00:02:39.644 [2024-11-06 13:46:18.786773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:39.644 [2024-11-06 13:46:18.788361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:39.644 [2024-11-06 13:46:18.788399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:02:39.644 Passthru0 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:02:39.644 { 00:02:39.644 "name": "Malloc0", 00:02:39.644 "aliases": [ 00:02:39.644 "29da1d78-bbe1-480d-8a55-3a613af67caa" 00:02:39.644 ], 00:02:39.644 "product_name": "Malloc disk", 00:02:39.644 "block_size": 512, 00:02:39.644 "num_blocks": 16384, 00:02:39.644 "uuid": "29da1d78-bbe1-480d-8a55-3a613af67caa", 00:02:39.644 "assigned_rate_limits": { 00:02:39.644 "rw_ios_per_sec": 0, 00:02:39.644 "rw_mbytes_per_sec": 0, 00:02:39.644 "r_mbytes_per_sec": 0, 00:02:39.644 "w_mbytes_per_sec": 0 00:02:39.644 }, 00:02:39.644 "claimed": true, 00:02:39.644 "claim_type": "exclusive_write", 00:02:39.644 "zoned": false, 00:02:39.644 "supported_io_types": { 00:02:39.644 "read": true, 00:02:39.644 "write": true, 00:02:39.644 "unmap": true, 00:02:39.644 "flush": 
true, 00:02:39.644 "reset": true, 00:02:39.644 "nvme_admin": false, 00:02:39.644 "nvme_io": false, 00:02:39.644 "nvme_io_md": false, 00:02:39.644 "write_zeroes": true, 00:02:39.644 "zcopy": true, 00:02:39.644 "get_zone_info": false, 00:02:39.644 "zone_management": false, 00:02:39.644 "zone_append": false, 00:02:39.644 "compare": false, 00:02:39.644 "compare_and_write": false, 00:02:39.644 "abort": true, 00:02:39.644 "seek_hole": false, 00:02:39.644 "seek_data": false, 00:02:39.644 "copy": true, 00:02:39.644 "nvme_iov_md": false 00:02:39.644 }, 00:02:39.644 "memory_domains": [ 00:02:39.644 { 00:02:39.644 "dma_device_id": "system", 00:02:39.644 "dma_device_type": 1 00:02:39.644 }, 00:02:39.644 { 00:02:39.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:39.644 "dma_device_type": 2 00:02:39.644 } 00:02:39.644 ], 00:02:39.644 "driver_specific": {} 00:02:39.644 }, 00:02:39.644 { 00:02:39.644 "name": "Passthru0", 00:02:39.644 "aliases": [ 00:02:39.644 "bc70a59a-7822-5696-8383-c42f97131869" 00:02:39.644 ], 00:02:39.644 "product_name": "passthru", 00:02:39.644 "block_size": 512, 00:02:39.644 "num_blocks": 16384, 00:02:39.644 "uuid": "bc70a59a-7822-5696-8383-c42f97131869", 00:02:39.644 "assigned_rate_limits": { 00:02:39.644 "rw_ios_per_sec": 0, 00:02:39.644 "rw_mbytes_per_sec": 0, 00:02:39.644 "r_mbytes_per_sec": 0, 00:02:39.644 "w_mbytes_per_sec": 0 00:02:39.644 }, 00:02:39.644 "claimed": false, 00:02:39.644 "zoned": false, 00:02:39.644 "supported_io_types": { 00:02:39.644 "read": true, 00:02:39.644 "write": true, 00:02:39.644 "unmap": true, 00:02:39.644 "flush": true, 00:02:39.644 "reset": true, 00:02:39.644 "nvme_admin": false, 00:02:39.644 "nvme_io": false, 00:02:39.644 "nvme_io_md": false, 00:02:39.644 "write_zeroes": true, 00:02:39.644 "zcopy": true, 00:02:39.644 "get_zone_info": false, 00:02:39.644 "zone_management": false, 00:02:39.644 "zone_append": false, 00:02:39.644 "compare": false, 00:02:39.644 "compare_and_write": false, 00:02:39.644 "abort": true, 00:02:39.644 "seek_hole": false, 00:02:39.644 "seek_data": false, 00:02:39.644 "copy": true, 00:02:39.644 "nvme_iov_md": false 00:02:39.644 }, 00:02:39.644 "memory_domains": [ 00:02:39.644 { 00:02:39.644 "dma_device_id": "system", 00:02:39.644 "dma_device_type": 1 00:02:39.644 }, 00:02:39.644 { 00:02:39.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:39.644 "dma_device_type": 2 00:02:39.644 } 00:02:39.644 ], 00:02:39.644 "driver_specific": { 00:02:39.644 "passthru": { 00:02:39.644 "name": "Passthru0", 00:02:39.644 "base_bdev_name": "Malloc0" 00:02:39.644 } 00:02:39.644 } 00:02:39.644 } 00:02:39.644 ]' 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.644 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.644 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.645 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:02:39.645 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:02:39.645 13:46:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:02:39.645 00:02:39.645 real 0m0.203s 00:02:39.645 user 0m0.111s 00:02:39.645 sys 0m0.033s 00:02:39.645 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:39.645 13:46:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:39.645 ************************************ 00:02:39.645 END TEST rpc_integrity 00:02:39.645 ************************************ 00:02:39.645 13:46:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:02:39.645 13:46:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:39.645 13:46:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:39.645 13:46:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 ************************************ 00:02:39.906 START TEST rpc_plugins 00:02:39.906 ************************************ 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:02:39.906 { 00:02:39.906 "name": "Malloc1", 00:02:39.906 "aliases": [ 00:02:39.906 "80440454-fa58-4537-be57-b063378ea6a6" 00:02:39.906 ], 00:02:39.906 "product_name": "Malloc disk", 00:02:39.906 "block_size": 4096, 00:02:39.906 "num_blocks": 256, 00:02:39.906 "uuid": "80440454-fa58-4537-be57-b063378ea6a6", 00:02:39.906 "assigned_rate_limits": { 00:02:39.906 "rw_ios_per_sec": 0, 00:02:39.906 "rw_mbytes_per_sec": 0, 00:02:39.906 "r_mbytes_per_sec": 0, 00:02:39.906 "w_mbytes_per_sec": 0 00:02:39.906 }, 00:02:39.906 "claimed": false, 00:02:39.906 "zoned": false, 00:02:39.906 "supported_io_types": { 00:02:39.906 "read": true, 00:02:39.906 "write": true, 00:02:39.906 "unmap": true, 00:02:39.906 "flush": true, 00:02:39.906 "reset": true, 00:02:39.906 "nvme_admin": false, 00:02:39.906 "nvme_io": false, 00:02:39.906 "nvme_io_md": false, 00:02:39.906 "write_zeroes": true, 00:02:39.906 "zcopy": true, 00:02:39.906 "get_zone_info": false, 00:02:39.906 "zone_management": false, 00:02:39.906 "zone_append": false, 00:02:39.906 "compare": false, 00:02:39.906 "compare_and_write": false, 00:02:39.906 "abort": true, 00:02:39.906 "seek_hole": false, 00:02:39.906 "seek_data": false, 00:02:39.906 "copy": true, 00:02:39.906 "nvme_iov_md": false 
00:02:39.906 }, 00:02:39.906 "memory_domains": [ 00:02:39.906 { 00:02:39.906 "dma_device_id": "system", 00:02:39.906 "dma_device_type": 1 00:02:39.906 }, 00:02:39.906 { 00:02:39.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:39.906 "dma_device_type": 2 00:02:39.906 } 00:02:39.906 ], 00:02:39.906 "driver_specific": {} 00:02:39.906 } 00:02:39.906 ]' 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:02:39.906 13:46:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.906 13:46:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 13:46:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.906 13:46:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:02:39.906 13:46:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.906 13:46:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 13:46:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.906 13:46:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:02:39.906 13:46:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:02:39.906 13:46:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:02:39.906 00:02:39.906 real 0m0.104s 00:02:39.906 user 0m0.053s 00:02:39.906 sys 0m0.018s 00:02:39.906 13:46:19 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:39.906 13:46:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 ************************************ 00:02:39.906 END TEST rpc_plugins 00:02:39.906 ************************************ 00:02:39.906 13:46:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:02:39.906 13:46:19 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:39.906 13:46:19 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:39.906 13:46:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 ************************************ 00:02:39.906 START TEST rpc_trace_cmd_test 00:02:39.906 ************************************ 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:02:39.906 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid600147", 00:02:39.906 "tpoint_group_mask": "0x8", 00:02:39.906 "iscsi_conn": { 00:02:39.906 "mask": "0x2", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "scsi": { 00:02:39.906 "mask": "0x4", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "bdev": { 00:02:39.906 "mask": "0x8", 00:02:39.906 "tpoint_mask": "0xffffffffffffffff" 00:02:39.906 }, 00:02:39.906 "nvmf_rdma": { 00:02:39.906 "mask": "0x10", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "nvmf_tcp": { 00:02:39.906 "mask": "0x20", 00:02:39.906 
"tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "ftl": { 00:02:39.906 "mask": "0x40", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "blobfs": { 00:02:39.906 "mask": "0x80", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "dsa": { 00:02:39.906 "mask": "0x200", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "thread": { 00:02:39.906 "mask": "0x400", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "nvme_pcie": { 00:02:39.906 "mask": "0x800", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "iaa": { 00:02:39.906 "mask": "0x1000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "nvme_tcp": { 00:02:39.906 "mask": "0x2000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "bdev_nvme": { 00:02:39.906 "mask": "0x4000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "sock": { 00:02:39.906 "mask": "0x8000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "blob": { 00:02:39.906 "mask": "0x10000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "bdev_raid": { 00:02:39.906 "mask": "0x20000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 }, 00:02:39.906 "scheduler": { 00:02:39.906 "mask": "0x40000", 00:02:39.906 "tpoint_mask": "0x0" 00:02:39.906 } 00:02:39.906 }' 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:02:39.906 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:02:40.167 00:02:40.167 real 0m0.156s 00:02:40.167 user 0m0.124s 00:02:40.167 sys 0m0.023s 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:40.167 13:46:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:02:40.167 ************************************ 00:02:40.167 END TEST rpc_trace_cmd_test 00:02:40.167 ************************************ 00:02:40.167 13:46:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:02:40.167 13:46:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:02:40.167 13:46:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:02:40.167 13:46:19 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:40.167 13:46:19 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:40.167 13:46:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:40.167 ************************************ 00:02:40.167 START TEST rpc_daemon_integrity 00:02:40.167 ************************************ 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.167 13:46:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:02:40.167 { 00:02:40.167 "name": "Malloc2", 00:02:40.167 "aliases": [ 00:02:40.167 "64a1813c-0182-4dd4-9ffd-608690b332ef" 00:02:40.167 ], 00:02:40.167 "product_name": "Malloc disk", 00:02:40.167 "block_size": 512, 00:02:40.167 "num_blocks": 16384, 00:02:40.167 "uuid": "64a1813c-0182-4dd4-9ffd-608690b332ef", 00:02:40.167 "assigned_rate_limits": { 00:02:40.167 "rw_ios_per_sec": 0, 00:02:40.167 "rw_mbytes_per_sec": 0, 00:02:40.167 "r_mbytes_per_sec": 0, 00:02:40.167 "w_mbytes_per_sec": 0 00:02:40.167 }, 00:02:40.167 "claimed": false, 00:02:40.167 "zoned": false, 00:02:40.167 "supported_io_types": { 00:02:40.167 "read": true, 00:02:40.167 "write": true, 00:02:40.167 "unmap": true, 00:02:40.167 "flush": true, 00:02:40.167 "reset": true, 00:02:40.167 "nvme_admin": false, 00:02:40.167 "nvme_io": false, 00:02:40.167 "nvme_io_md": false, 00:02:40.167 "write_zeroes": true, 00:02:40.167 "zcopy": true, 00:02:40.167 "get_zone_info": false, 00:02:40.167 "zone_management": false, 00:02:40.167 "zone_append": false, 00:02:40.167 "compare": false, 00:02:40.167 "compare_and_write": false, 00:02:40.167 "abort": true, 00:02:40.167 "seek_hole": false, 00:02:40.167 "seek_data": false, 00:02:40.167 "copy": true, 00:02:40.167 "nvme_iov_md": false 00:02:40.167 }, 00:02:40.167 "memory_domains": [ 00:02:40.167 { 00:02:40.167 "dma_device_id": "system", 00:02:40.167 "dma_device_type": 1 00:02:40.167 }, 00:02:40.167 { 00:02:40.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:40.167 "dma_device_type": 2 00:02:40.167 } 00:02:40.167 ], 00:02:40.167 "driver_specific": {} 00:02:40.167 } 00:02:40.167 ]' 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.167 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.167 [2024-11-06 13:46:19.404361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:02:40.167 
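The bdev_get_bdevs JSON these rpc_*_integrity checks parse (block_size 512, num_blocks 16384, the exclusive_write claim taken by the passthru vbdev being created here) can also be read in-process through public bdev accessors. A hedged sketch; it assumes it runs on an SPDK application thread with the bdev subsystem already initialized, which the snippet does not set up:

```c
#include <inttypes.h>
#include <stdio.h>

#include "spdk/bdev.h"

/* Assumes an SPDK app thread with the bdev layer up; setup omitted. */
static void
dump_bdev_geometry(const char *name)
{
	struct spdk_bdev *bdev = spdk_bdev_get_by_name(name);

	if (bdev == NULL) {
		fprintf(stderr, "bdev %s not found\n", name);
		return;
	}
	/* The same figures the JSON in this test reports for Malloc2. */
	printf("%s: block_size=%" PRIu32 " num_blocks=%" PRIu64 "\n",
	       spdk_bdev_get_name(bdev),
	       spdk_bdev_get_block_size(bdev),
	       spdk_bdev_get_num_blocks(bdev));
}
```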
[2024-11-06 13:46:19.404402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:40.168 [2024-11-06 13:46:19.404416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bbee00 00:02:40.168 [2024-11-06 13:46:19.404423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:40.168 [2024-11-06 13:46:19.405866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:40.168 [2024-11-06 13:46:19.405903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:02:40.168 Passthru0 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:02:40.168 { 00:02:40.168 "name": "Malloc2", 00:02:40.168 "aliases": [ 00:02:40.168 "64a1813c-0182-4dd4-9ffd-608690b332ef" 00:02:40.168 ], 00:02:40.168 "product_name": "Malloc disk", 00:02:40.168 "block_size": 512, 00:02:40.168 "num_blocks": 16384, 00:02:40.168 "uuid": "64a1813c-0182-4dd4-9ffd-608690b332ef", 00:02:40.168 "assigned_rate_limits": { 00:02:40.168 "rw_ios_per_sec": 0, 00:02:40.168 "rw_mbytes_per_sec": 0, 00:02:40.168 "r_mbytes_per_sec": 0, 00:02:40.168 "w_mbytes_per_sec": 0 00:02:40.168 }, 00:02:40.168 "claimed": true, 00:02:40.168 "claim_type": "exclusive_write", 00:02:40.168 "zoned": false, 00:02:40.168 "supported_io_types": { 00:02:40.168 "read": true, 00:02:40.168 "write": true, 00:02:40.168 "unmap": true, 00:02:40.168 "flush": true, 00:02:40.168 "reset": true, 00:02:40.168 "nvme_admin": false, 00:02:40.168 "nvme_io": false, 00:02:40.168 "nvme_io_md": false, 00:02:40.168 "write_zeroes": true, 00:02:40.168 "zcopy": true, 00:02:40.168 "get_zone_info": false, 00:02:40.168 "zone_management": false, 00:02:40.168 "zone_append": false, 00:02:40.168 "compare": false, 00:02:40.168 "compare_and_write": false, 00:02:40.168 "abort": true, 00:02:40.168 "seek_hole": false, 00:02:40.168 "seek_data": false, 00:02:40.168 "copy": true, 00:02:40.168 "nvme_iov_md": false 00:02:40.168 }, 00:02:40.168 "memory_domains": [ 00:02:40.168 { 00:02:40.168 "dma_device_id": "system", 00:02:40.168 "dma_device_type": 1 00:02:40.168 }, 00:02:40.168 { 00:02:40.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:40.168 "dma_device_type": 2 00:02:40.168 } 00:02:40.168 ], 00:02:40.168 "driver_specific": {} 00:02:40.168 }, 00:02:40.168 { 00:02:40.168 "name": "Passthru0", 00:02:40.168 "aliases": [ 00:02:40.168 "94667fcc-2270-5e38-a455-d52baf5cb626" 00:02:40.168 ], 00:02:40.168 "product_name": "passthru", 00:02:40.168 "block_size": 512, 00:02:40.168 "num_blocks": 16384, 00:02:40.168 "uuid": "94667fcc-2270-5e38-a455-d52baf5cb626", 00:02:40.168 "assigned_rate_limits": { 00:02:40.168 "rw_ios_per_sec": 0, 00:02:40.168 "rw_mbytes_per_sec": 0, 00:02:40.168 "r_mbytes_per_sec": 0, 00:02:40.168 "w_mbytes_per_sec": 0 00:02:40.168 }, 00:02:40.168 "claimed": false, 00:02:40.168 "zoned": false, 00:02:40.168 "supported_io_types": { 00:02:40.168 "read": true, 00:02:40.168 "write": true, 00:02:40.168 "unmap": true, 00:02:40.168 "flush": true, 00:02:40.168 "reset": true, 
00:02:40.168 "nvme_admin": false, 00:02:40.168 "nvme_io": false, 00:02:40.168 "nvme_io_md": false, 00:02:40.168 "write_zeroes": true, 00:02:40.168 "zcopy": true, 00:02:40.168 "get_zone_info": false, 00:02:40.168 "zone_management": false, 00:02:40.168 "zone_append": false, 00:02:40.168 "compare": false, 00:02:40.168 "compare_and_write": false, 00:02:40.168 "abort": true, 00:02:40.168 "seek_hole": false, 00:02:40.168 "seek_data": false, 00:02:40.168 "copy": true, 00:02:40.168 "nvme_iov_md": false 00:02:40.168 }, 00:02:40.168 "memory_domains": [ 00:02:40.168 { 00:02:40.168 "dma_device_id": "system", 00:02:40.168 "dma_device_type": 1 00:02:40.168 }, 00:02:40.168 { 00:02:40.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:40.168 "dma_device_type": 2 00:02:40.168 } 00:02:40.168 ], 00:02:40.168 "driver_specific": { 00:02:40.168 "passthru": { 00:02:40.168 "name": "Passthru0", 00:02:40.168 "base_bdev_name": "Malloc2" 00:02:40.168 } 00:02:40.168 } 00:02:40.168 } 00:02:40.168 ]' 00:02:40.168 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:02:40.429 00:02:40.429 real 0m0.205s 00:02:40.429 user 0m0.116s 00:02:40.429 sys 0m0.028s 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:40.429 13:46:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:40.429 ************************************ 00:02:40.429 END TEST rpc_daemon_integrity 00:02:40.429 ************************************ 00:02:40.429 13:46:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:02:40.429 13:46:19 rpc -- rpc/rpc.sh@84 -- # killprocess 600147 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@952 -- # '[' -z 600147 ']' 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@956 -- # kill -0 600147 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@957 -- # uname 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 600147 
00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 600147' 00:02:40.429 killing process with pid 600147 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@971 -- # kill 600147 00:02:40.429 13:46:19 rpc -- common/autotest_common.sh@976 -- # wait 600147 00:02:40.689 00:02:40.689 real 0m2.144s 00:02:40.689 user 0m2.609s 00:02:40.689 sys 0m0.634s 00:02:40.689 13:46:19 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:40.689 13:46:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:40.689 ************************************ 00:02:40.689 END TEST rpc 00:02:40.689 ************************************ 00:02:40.689 13:46:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:02:40.689 13:46:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:40.689 13:46:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:40.689 13:46:19 -- common/autotest_common.sh@10 -- # set +x 00:02:40.689 ************************************ 00:02:40.689 START TEST skip_rpc 00:02:40.689 ************************************ 00:02:40.689 13:46:19 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:02:40.689 * Looking for test storage... 00:02:40.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:40.689 13:46:19 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:40.689 13:46:19 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:02:40.689 13:46:19 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:40.949 13:46:19 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:02:40.949 13:46:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:40.949 13:46:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:02:40.949 13:46:20 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:40.949 13:46:20 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:40.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.949 --rc genhtml_branch_coverage=1 00:02:40.949 --rc genhtml_function_coverage=1 00:02:40.949 --rc genhtml_legend=1 00:02:40.949 --rc geninfo_all_blocks=1 00:02:40.949 --rc geninfo_unexecuted_blocks=1 00:02:40.949 00:02:40.949 ' 00:02:40.949 13:46:20 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:40.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.949 --rc genhtml_branch_coverage=1 00:02:40.949 --rc genhtml_function_coverage=1 00:02:40.949 --rc genhtml_legend=1 00:02:40.949 --rc geninfo_all_blocks=1 00:02:40.949 --rc geninfo_unexecuted_blocks=1 00:02:40.949 00:02:40.949 ' 00:02:40.949 13:46:20 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:40.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.950 --rc genhtml_branch_coverage=1 00:02:40.950 --rc genhtml_function_coverage=1 00:02:40.950 --rc genhtml_legend=1 00:02:40.950 --rc geninfo_all_blocks=1 00:02:40.950 --rc geninfo_unexecuted_blocks=1 00:02:40.950 00:02:40.950 ' 00:02:40.950 13:46:20 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.950 --rc genhtml_branch_coverage=1 00:02:40.950 --rc genhtml_function_coverage=1 00:02:40.950 --rc genhtml_legend=1 00:02:40.950 --rc geninfo_all_blocks=1 00:02:40.950 --rc geninfo_unexecuted_blocks=1 00:02:40.950 00:02:40.950 ' 00:02:40.950 13:46:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:40.950 13:46:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:02:40.950 13:46:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:02:40.950 13:46:20 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:40.950 13:46:20 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:40.950 13:46:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:40.950 ************************************ 00:02:40.950 START TEST skip_rpc 00:02:40.950 ************************************ 00:02:40.950 13:46:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:02:40.950 
13:46:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=600746 00:02:40.950 13:46:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:40.950 13:46:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:02:40.950 13:46:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:02:40.950 [2024-11-06 13:46:20.075232] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:02:40.950 [2024-11-06 13:46:20.075299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600746 ] 00:02:40.950 [2024-11-06 13:46:20.160899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:40.950 [2024-11-06 13:46:20.215519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:46.222 13:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:02:46.222 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:02:46.222 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:02:46.222 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:02:46.222 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 600746 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 600746 ']' 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 600746 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 600746 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 600746' 00:02:46.223 killing process with pid 600746 00:02:46.223 13:46:25 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 600746 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 600746 00:02:46.223 00:02:46.223 real 0m5.240s 00:02:46.223 user 0m4.989s 00:02:46.223 sys 0m0.278s 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:46.223 13:46:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:46.223 ************************************ 00:02:46.223 END TEST skip_rpc 00:02:46.223 ************************************ 00:02:46.223 13:46:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:02:46.223 13:46:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:46.223 13:46:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:46.223 13:46:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:46.223 ************************************ 00:02:46.223 START TEST skip_rpc_with_json 00:02:46.223 ************************************ 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=602024 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 602024 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 602024 ']' 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:46.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:46.223 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:02:46.223 [2024-11-06 13:46:25.359261] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
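The skip_rpc case that just passed boils down to one assertion: with the target started via --no-rpc-server, any RPC must fail. A minimal sketch of that flow, assuming the SPDK build paths shown in this log (the real test wraps the check in the NOT/rpc_cmd helpers traced above):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # no RPC listener at all
  spdk_pid=$!
  sleep 5                                       # mirrors rpc/skip_rpc.sh@19 above
  if scripts/rpc.py spdk_get_version; then      # must fail: nothing listens on /var/tmp/spdk.sock
      echo "FAIL: RPC answered without a server" >&2
      exit 1
  fi
  kill "$spdk_pid"                              # matches the killprocess 600746 step above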
00:02:46.223 [2024-11-06 13:46:25.359308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602024 ] 00:02:46.223 [2024-11-06 13:46:25.424734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:46.223 [2024-11-06 13:46:25.452828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:46.482 [2024-11-06 13:46:25.620209] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:02:46.482 request: 00:02:46.482 { 00:02:46.482 "trtype": "tcp", 00:02:46.482 "method": "nvmf_get_transports", 00:02:46.482 "req_id": 1 00:02:46.482 } 00:02:46.482 Got JSON-RPC error response 00:02:46.482 response: 00:02:46.482 { 00:02:46.482 "code": -19, 00:02:46.482 "message": "No such device" 00:02:46.482 } 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:46.482 [2024-11-06 13:46:25.628295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:02:46.482 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:46.742 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:02:46.742 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:46.742 { 00:02:46.742 "subsystems": [ 00:02:46.742 { 00:02:46.742 "subsystem": "fsdev", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "fsdev_set_opts", 00:02:46.742 "params": { 00:02:46.742 "fsdev_io_pool_size": 65535, 00:02:46.742 "fsdev_io_cache_size": 256 00:02:46.742 } 00:02:46.742 } 00:02:46.742 ] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "vfio_user_target", 00:02:46.742 "config": null 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "keyring", 00:02:46.742 "config": [] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "iobuf", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "iobuf_set_options", 00:02:46.742 "params": { 00:02:46.742 "small_pool_count": 8192, 00:02:46.742 "large_pool_count": 1024, 00:02:46.742 "small_bufsize": 8192, 00:02:46.742 "large_bufsize": 135168, 00:02:46.742 "enable_numa": false 00:02:46.742 } 00:02:46.742 } 00:02:46.742 
] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "sock", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "sock_set_default_impl", 00:02:46.742 "params": { 00:02:46.742 "impl_name": "posix" 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "sock_impl_set_options", 00:02:46.742 "params": { 00:02:46.742 "impl_name": "ssl", 00:02:46.742 "recv_buf_size": 4096, 00:02:46.742 "send_buf_size": 4096, 00:02:46.742 "enable_recv_pipe": true, 00:02:46.742 "enable_quickack": false, 00:02:46.742 "enable_placement_id": 0, 00:02:46.742 "enable_zerocopy_send_server": true, 00:02:46.742 "enable_zerocopy_send_client": false, 00:02:46.742 "zerocopy_threshold": 0, 00:02:46.742 "tls_version": 0, 00:02:46.742 "enable_ktls": false 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "sock_impl_set_options", 00:02:46.742 "params": { 00:02:46.742 "impl_name": "posix", 00:02:46.742 "recv_buf_size": 2097152, 00:02:46.742 "send_buf_size": 2097152, 00:02:46.742 "enable_recv_pipe": true, 00:02:46.742 "enable_quickack": false, 00:02:46.742 "enable_placement_id": 0, 00:02:46.742 "enable_zerocopy_send_server": true, 00:02:46.742 "enable_zerocopy_send_client": false, 00:02:46.742 "zerocopy_threshold": 0, 00:02:46.742 "tls_version": 0, 00:02:46.742 "enable_ktls": false 00:02:46.742 } 00:02:46.742 } 00:02:46.742 ] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "vmd", 00:02:46.742 "config": [] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "accel", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "accel_set_options", 00:02:46.742 "params": { 00:02:46.742 "small_cache_size": 128, 00:02:46.742 "large_cache_size": 16, 00:02:46.742 "task_count": 2048, 00:02:46.742 "sequence_count": 2048, 00:02:46.742 "buf_count": 2048 00:02:46.742 } 00:02:46.742 } 00:02:46.742 ] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "bdev", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "bdev_set_options", 00:02:46.742 "params": { 00:02:46.742 "bdev_io_pool_size": 65535, 00:02:46.742 "bdev_io_cache_size": 256, 00:02:46.742 "bdev_auto_examine": true, 00:02:46.742 "iobuf_small_cache_size": 128, 00:02:46.742 "iobuf_large_cache_size": 16 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "bdev_raid_set_options", 00:02:46.742 "params": { 00:02:46.742 "process_window_size_kb": 1024, 00:02:46.742 "process_max_bandwidth_mb_sec": 0 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "bdev_iscsi_set_options", 00:02:46.742 "params": { 00:02:46.742 "timeout_sec": 30 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "bdev_nvme_set_options", 00:02:46.742 "params": { 00:02:46.742 "action_on_timeout": "none", 00:02:46.742 "timeout_us": 0, 00:02:46.742 "timeout_admin_us": 0, 00:02:46.742 "keep_alive_timeout_ms": 10000, 00:02:46.742 "arbitration_burst": 0, 00:02:46.742 "low_priority_weight": 0, 00:02:46.742 "medium_priority_weight": 0, 00:02:46.742 "high_priority_weight": 0, 00:02:46.742 "nvme_adminq_poll_period_us": 10000, 00:02:46.742 "nvme_ioq_poll_period_us": 0, 00:02:46.742 "io_queue_requests": 0, 00:02:46.742 "delay_cmd_submit": true, 00:02:46.742 "transport_retry_count": 4, 00:02:46.742 "bdev_retry_count": 3, 00:02:46.742 "transport_ack_timeout": 0, 00:02:46.742 "ctrlr_loss_timeout_sec": 0, 00:02:46.742 "reconnect_delay_sec": 0, 00:02:46.742 "fast_io_fail_timeout_sec": 0, 00:02:46.742 "disable_auto_failback": false, 00:02:46.742 "generate_uuids": false, 00:02:46.742 "transport_tos": 0, 
00:02:46.742 "nvme_error_stat": false, 00:02:46.742 "rdma_srq_size": 0, 00:02:46.742 "io_path_stat": false, 00:02:46.742 "allow_accel_sequence": false, 00:02:46.742 "rdma_max_cq_size": 0, 00:02:46.742 "rdma_cm_event_timeout_ms": 0, 00:02:46.742 "dhchap_digests": [ 00:02:46.742 "sha256", 00:02:46.742 "sha384", 00:02:46.742 "sha512" 00:02:46.742 ], 00:02:46.742 "dhchap_dhgroups": [ 00:02:46.742 "null", 00:02:46.742 "ffdhe2048", 00:02:46.742 "ffdhe3072", 00:02:46.742 "ffdhe4096", 00:02:46.742 "ffdhe6144", 00:02:46.742 "ffdhe8192" 00:02:46.742 ] 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "bdev_nvme_set_hotplug", 00:02:46.742 "params": { 00:02:46.742 "period_us": 100000, 00:02:46.742 "enable": false 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "bdev_wait_for_examine" 00:02:46.742 } 00:02:46.742 ] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "scsi", 00:02:46.742 "config": null 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "scheduler", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "framework_set_scheduler", 00:02:46.742 "params": { 00:02:46.742 "name": "static" 00:02:46.742 } 00:02:46.742 } 00:02:46.742 ] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "vhost_scsi", 00:02:46.742 "config": [] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "vhost_blk", 00:02:46.742 "config": [] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "ublk", 00:02:46.742 "config": [] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "nbd", 00:02:46.742 "config": [] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "nvmf", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "nvmf_set_config", 00:02:46.742 "params": { 00:02:46.742 "discovery_filter": "match_any", 00:02:46.742 "admin_cmd_passthru": { 00:02:46.742 "identify_ctrlr": false 00:02:46.742 }, 00:02:46.742 "dhchap_digests": [ 00:02:46.742 "sha256", 00:02:46.742 "sha384", 00:02:46.742 "sha512" 00:02:46.742 ], 00:02:46.742 "dhchap_dhgroups": [ 00:02:46.742 "null", 00:02:46.742 "ffdhe2048", 00:02:46.742 "ffdhe3072", 00:02:46.742 "ffdhe4096", 00:02:46.742 "ffdhe6144", 00:02:46.742 "ffdhe8192" 00:02:46.742 ] 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "nvmf_set_max_subsystems", 00:02:46.742 "params": { 00:02:46.742 "max_subsystems": 1024 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "nvmf_set_crdt", 00:02:46.742 "params": { 00:02:46.742 "crdt1": 0, 00:02:46.742 "crdt2": 0, 00:02:46.742 "crdt3": 0 00:02:46.742 } 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "method": "nvmf_create_transport", 00:02:46.742 "params": { 00:02:46.742 "trtype": "TCP", 00:02:46.742 "max_queue_depth": 128, 00:02:46.742 "max_io_qpairs_per_ctrlr": 127, 00:02:46.742 "in_capsule_data_size": 4096, 00:02:46.742 "max_io_size": 131072, 00:02:46.742 "io_unit_size": 131072, 00:02:46.742 "max_aq_depth": 128, 00:02:46.742 "num_shared_buffers": 511, 00:02:46.742 "buf_cache_size": 4294967295, 00:02:46.742 "dif_insert_or_strip": false, 00:02:46.742 "zcopy": false, 00:02:46.742 "c2h_success": true, 00:02:46.742 "sock_priority": 0, 00:02:46.742 "abort_timeout_sec": 1, 00:02:46.742 "ack_timeout": 0, 00:02:46.742 "data_wr_pool_size": 0 00:02:46.742 } 00:02:46.742 } 00:02:46.742 ] 00:02:46.742 }, 00:02:46.742 { 00:02:46.742 "subsystem": "iscsi", 00:02:46.742 "config": [ 00:02:46.742 { 00:02:46.742 "method": "iscsi_set_options", 00:02:46.742 "params": { 00:02:46.742 "node_base": "iqn.2016-06.io.spdk", 00:02:46.742 "max_sessions": 
128, 00:02:46.742 "max_connections_per_session": 2, 00:02:46.742 "max_queue_depth": 64, 00:02:46.742 "default_time2wait": 2, 00:02:46.742 "default_time2retain": 20, 00:02:46.743 "first_burst_length": 8192, 00:02:46.743 "immediate_data": true, 00:02:46.743 "allow_duplicated_isid": false, 00:02:46.743 "error_recovery_level": 0, 00:02:46.743 "nop_timeout": 60, 00:02:46.743 "nop_in_interval": 30, 00:02:46.743 "disable_chap": false, 00:02:46.743 "require_chap": false, 00:02:46.743 "mutual_chap": false, 00:02:46.743 "chap_group": 0, 00:02:46.743 "max_large_datain_per_connection": 64, 00:02:46.743 "max_r2t_per_connection": 4, 00:02:46.743 "pdu_pool_size": 36864, 00:02:46.743 "immediate_data_pool_size": 16384, 00:02:46.743 "data_out_pool_size": 2048 00:02:46.743 } 00:02:46.743 } 00:02:46.743 ] 00:02:46.743 } 00:02:46.743 ] 00:02:46.743 } 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 602024 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 602024 ']' 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 602024 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 602024 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 602024' 00:02:46.743 killing process with pid 602024 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 602024 00:02:46.743 13:46:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 602024 00:02:46.743 13:46:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=602082 00:02:46.743 13:46:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:02:46.743 13:46:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 602082 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 602082 ']' 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 602082 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 602082 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 602082' 00:02:52.012 killing process with pid 602082 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 602082 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 602082 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:02:52.012 00:02:52.012 real 0m5.940s 00:02:52.012 user 0m5.733s 00:02:52.012 sys 0m0.444s 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:52.012 13:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:52.012 ************************************ 00:02:52.012 END TEST skip_rpc_with_json 00:02:52.012 ************************************ 00:02:52.012 13:46:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:02:52.012 13:46:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:52.012 13:46:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:52.012 13:46:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:52.271 ************************************ 00:02:52.271 START TEST skip_rpc_with_delay 00:02:52.271 ************************************ 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:02:52.271 [2024-11-06 
13:46:31.345737] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:02:52.271 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:02:52.271 00:02:52.271 real 0m0.054s 00:02:52.272 user 0m0.032s 00:02:52.272 sys 0m0.021s 00:02:52.272 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:52.272 13:46:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:02:52.272 ************************************ 00:02:52.272 END TEST skip_rpc_with_delay 00:02:52.272 ************************************ 00:02:52.272 13:46:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:02:52.272 13:46:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:02:52.272 13:46:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:02:52.272 13:46:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:52.272 13:46:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:52.272 13:46:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:52.272 ************************************ 00:02:52.272 START TEST exit_on_failed_rpc_init 00:02:52.272 ************************************ 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=603429 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 603429 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 603429 ']' 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:52.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:02:52.272 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:02:52.272 [2024-11-06 13:46:31.447321] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:02:52.272 [2024-11-06 13:46:31.447371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603429 ] 00:02:52.272 [2024-11-06 13:46:31.514674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:52.272 [2024-11-06 13:46:31.549463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:02:52.531 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:02:52.531 [2024-11-06 13:46:31.757261] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:02:52.531 [2024-11-06 13:46:31.757310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603440 ] 00:02:52.790 [2024-11-06 13:46:31.833954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:52.790 [2024-11-06 13:46:31.869435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:02:52.790 [2024-11-06 13:46:31.869481] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
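The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the whole point of exit_on_failed_rpc_init: a second target on the same default RPC socket must refuse to start, and the shutdown lines that follow confirm it exits non-zero (es=234, later normalized to 1). A condensed sketch under the same binary paths; the real test routes the check through the valid_exec_arg/NOT wrappers traced here:

  build/bin/spdk_tgt -m 0x1 &         # first target owns /var/tmp/spdk.sock
  spdk_pid=$!
  sleep 1                             # stand-in for the log's waitforlisten helper
  if build/bin/spdk_tgt -m 0x2; then  # same default RPC socket: must fail to init
      echo "FAIL: second instance started" >&2
  fi
  kill "$spdk_pid"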
00:02:52.790 [2024-11-06 13:46:31.869491] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:02:52.790 [2024-11-06 13:46:31.869498] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 603429 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 603429 ']' 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 603429 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 603429 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 603429' 00:02:52.790 killing process with pid 603429 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 603429 00:02:52.790 13:46:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 603429 00:02:53.049 00:02:53.049 real 0m0.731s 00:02:53.049 user 0m0.819s 00:02:53.049 sys 0m0.298s 00:02:53.049 13:46:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:53.049 13:46:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:02:53.049 ************************************ 00:02:53.049 END TEST exit_on_failed_rpc_init 00:02:53.049 ************************************ 00:02:53.049 13:46:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:53.049 00:02:53.049 real 0m12.291s 00:02:53.049 user 0m11.716s 00:02:53.049 sys 0m1.240s 00:02:53.049 13:46:32 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:53.049 13:46:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:53.049 ************************************ 00:02:53.049 END TEST skip_rpc 00:02:53.049 ************************************ 00:02:53.049 13:46:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:02:53.049 13:46:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:53.049 13:46:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:53.049 13:46:32 -- 
common/autotest_common.sh@10 -- # set +x 00:02:53.049 ************************************ 00:02:53.049 START TEST rpc_client 00:02:53.049 ************************************ 00:02:53.049 13:46:32 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:02:53.049 * Looking for test storage... 00:02:53.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:02:53.049 13:46:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:53.049 13:46:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:02:53.049 13:46:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:53.049 13:46:32 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.049 13:46:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:02:53.309 13:46:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:02:53.309 13:46:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:53.309 13:46:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:53.309 13:46:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:53.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.309 --rc genhtml_branch_coverage=1 00:02:53.309 --rc genhtml_function_coverage=1 00:02:53.309 --rc genhtml_legend=1 00:02:53.309 --rc geninfo_all_blocks=1 00:02:53.309 --rc geninfo_unexecuted_blocks=1 00:02:53.309 00:02:53.309 ' 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:53.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.309 --rc genhtml_branch_coverage=1 00:02:53.309 --rc genhtml_function_coverage=1 00:02:53.309 --rc genhtml_legend=1 00:02:53.309 --rc geninfo_all_blocks=1 00:02:53.309 --rc geninfo_unexecuted_blocks=1 00:02:53.309 00:02:53.309 ' 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:53.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.309 --rc genhtml_branch_coverage=1 00:02:53.309 --rc genhtml_function_coverage=1 00:02:53.309 --rc genhtml_legend=1 00:02:53.309 --rc geninfo_all_blocks=1 00:02:53.309 --rc geninfo_unexecuted_blocks=1 00:02:53.309 00:02:53.309 ' 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:53.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.309 --rc genhtml_branch_coverage=1 00:02:53.309 --rc genhtml_function_coverage=1 00:02:53.309 --rc genhtml_legend=1 00:02:53.309 --rc geninfo_all_blocks=1 00:02:53.309 --rc geninfo_unexecuted_blocks=1 00:02:53.309 00:02:53.309 ' 00:02:53.309 13:46:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:02:53.309 OK 00:02:53.309 13:46:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:02:53.309 00:02:53.309 real 0m0.140s 00:02:53.309 user 0m0.078s 00:02:53.309 sys 0m0.068s 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:02:53.309 13:46:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:02:53.309 ************************************ 00:02:53.309 END TEST rpc_client 00:02:53.309 ************************************ 00:02:53.309 13:46:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
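The scripts/common.sh trace repeated above (cmp_versions, decimal, the ver1/ver2 arrays) is a dotted-version comparison used to pick LCOV options: "lt 1.15 2" asks whether the installed lcov predates 2.x. A self-contained sketch of the same logic; the real script additionally rejects non-numeric fields with the [[ ... =~ ^[0-9]+$ ]] test visible in the trace:

  lt() {
      local i a b
      IFS='.-' read -r -a a <<< "$1"
      IFS='.-' read -r -a b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                        # equal versions: not less-than
  }
  lt 1.15 2 && echo "lcov predates 2.x"               # same call as in the trace above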
00:02:53.309 13:46:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:02:53.309 13:46:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:02:53.309 13:46:32 -- common/autotest_common.sh@10 -- # set +x 00:02:53.309 ************************************ 00:02:53.309 START TEST json_config 00:02:53.309 ************************************ 00:02:53.309 13:46:32 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:02:53.309 13:46:32 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:53.309 13:46:32 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:02:53.309 13:46:32 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:53.309 13:46:32 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:53.309 13:46:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:53.309 13:46:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.309 13:46:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:02:53.309 13:46:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:02:53.309 13:46:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:53.309 13:46:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:02:53.309 13:46:32 json_config -- scripts/common.sh@345 -- # : 1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:53.309 13:46:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.309 13:46:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@353 -- # local d=1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.309 13:46:32 json_config -- scripts/common.sh@355 -- # echo 1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:02:53.309 13:46:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@353 -- # local d=2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.309 13:46:32 json_config -- scripts/common.sh@355 -- # echo 2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:02:53.309 13:46:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:53.309 13:46:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:53.309 13:46:32 json_config -- scripts/common.sh@368 -- # return 0 00:02:53.309 13:46:32 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:53.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.310 --rc genhtml_branch_coverage=1 00:02:53.310 --rc genhtml_function_coverage=1 00:02:53.310 --rc genhtml_legend=1 00:02:53.310 --rc geninfo_all_blocks=1 00:02:53.310 --rc geninfo_unexecuted_blocks=1 00:02:53.310 00:02:53.310 ' 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:53.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.310 --rc genhtml_branch_coverage=1 00:02:53.310 --rc genhtml_function_coverage=1 00:02:53.310 --rc genhtml_legend=1 00:02:53.310 --rc geninfo_all_blocks=1 00:02:53.310 --rc geninfo_unexecuted_blocks=1 00:02:53.310 00:02:53.310 ' 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:53.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.310 --rc genhtml_branch_coverage=1 00:02:53.310 --rc genhtml_function_coverage=1 00:02:53.310 --rc genhtml_legend=1 00:02:53.310 --rc geninfo_all_blocks=1 00:02:53.310 --rc geninfo_unexecuted_blocks=1 00:02:53.310 00:02:53.310 ' 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:53.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.310 --rc genhtml_branch_coverage=1 00:02:53.310 --rc genhtml_function_coverage=1 00:02:53.310 --rc genhtml_legend=1 00:02:53.310 --rc geninfo_all_blocks=1 00:02:53.310 --rc geninfo_unexecuted_blocks=1 00:02:53.310 00:02:53.310 ' 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:02:53.310 13:46:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:53.310 13:46:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:02:53.310 13:46:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:53.310 13:46:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.310 13:46:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.310 13:46:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.310 13:46:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.310 13:46:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.310 13:46:32 json_config -- paths/export.sh@5 -- # export PATH 00:02:53.310 13:46:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@51 -- # : 0 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
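The exports above (NVMF_PORT, NVME_HOSTNQN from nvme gen-hostnqn, the NVME_HOST array, NVME_CONNECT) set up the initiator identity that later nvmf scripts reuse. Roughly how they get consumed, as an illustrative sketch only, assuming test/nvmf/common.sh has been sourced; the target address here is a placeholder, not one taken from this log:

  # nvme gen-hostnqn / nvme connect are real nvme-cli commands; the
  # variables come from the environment dumped above
  $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n "$NVME_SUBNQN" \
      -a 127.0.0.1 -s "$NVMF_PORT"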
00:02:53.310 13:46:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:53.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:53.310 13:46:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:02:53.310 INFO: JSON configuration test init 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:53.310 13:46:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:02:53.310 13:46:32 json_config -- 
json_config/common.sh@9 -- # local app=target 00:02:53.310 13:46:32 json_config -- json_config/common.sh@10 -- # shift 00:02:53.310 13:46:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:02:53.310 13:46:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:02:53.310 13:46:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:02:53.310 13:46:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:53.310 13:46:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:53.310 13:46:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=603888 00:02:53.310 13:46:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:02:53.310 Waiting for target to run... 00:02:53.310 13:46:32 json_config -- json_config/common.sh@25 -- # waitforlisten 603888 /var/tmp/spdk_tgt.sock 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@833 -- # '[' -z 603888 ']' 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:53.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:02:53.310 13:46:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:02:53.310 13:46:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:53.310 [2024-11-06 13:46:32.587296] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
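The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is bash's [ builtin rejecting an empty string where -eq needs a number; the run proceeds anyway because the non-zero status simply falls through to the next branch. A defaulted expansion avoids the noise (the variable name below is hypothetical, not a patch to nvmf/common.sh):

  flag=""                                            # unset/empty, like the value at line 33
  if [ "${flag:-0}" -eq 1 ]; then echo enabled; fi   # defaults to 0, raises no error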
00:02:53.310 [2024-11-06 13:46:32.587387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603888 ] 00:02:53.878 [2024-11-06 13:46:32.860661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:53.878 [2024-11-06 13:46:32.884365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:54.136 13:46:33 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:02:54.136 13:46:33 json_config -- common/autotest_common.sh@866 -- # return 0 00:02:54.136 13:46:33 json_config -- json_config/common.sh@26 -- # echo '' 00:02:54.136 00:02:54.136 13:46:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:02:54.136 13:46:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:02:54.136 13:46:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:54.136 13:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:54.136 13:46:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:02:54.136 13:46:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:02:54.136 13:46:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:54.136 13:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:54.136 13:46:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:02:54.137 13:46:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:02:54.137 13:46:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:02:54.704 13:46:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:54.704 13:46:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:02:54.704 13:46:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:02:54.704 13:46:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:02:54.962 13:46:34 json_config -- 
json_config/json_config.sh@54 -- # tr ' ' '\n' 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@54 -- # sort 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:02:54.962 13:46:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:54.962 13:46:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:02:54.962 13:46:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:54.962 13:46:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:02:54.962 13:46:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:02:54.962 13:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:02:55.220 MallocForNvmf0 00:02:55.220 13:46:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:02:55.220 13:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:02:55.220 MallocForNvmf1 00:02:55.220 13:46:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:02:55.220 13:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:02:55.478 [2024-11-06 13:46:34.571342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:02:55.478 13:46:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:02:55.478 13:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:02:55.478 13:46:34 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:02:55.478 13:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:02:55.736 13:46:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:02:55.736 13:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:02:55.995 13:46:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:02:55.995 13:46:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:02:55.995 [2024-11-06 13:46:35.197254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:02:55.995 13:46:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:02:55.995 13:46:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:55.995 13:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:55.995 13:46:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:02:55.995 13:46:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:55.995 13:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:55.995 13:46:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:02:55.995 13:46:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:02:55.995 13:46:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:02:56.253 MallocBdevForConfigChangeCheck 00:02:56.253 13:46:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:02:56.253 13:46:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:56.253 13:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:56.253 13:46:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:02:56.253 13:46:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:56.511 13:46:35 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:02:56.511 INFO: shutting down applications... 
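
[Editor's sketch] The json_config setup traced above drives the target entirely through rpc.py on its UNIX socket: two malloc bdevs, a TCP transport, subsystem cnode1, two namespaces, and a listener on 127.0.0.1:4420. The RPC names and arguments below are verbatim from the trace; only the $RPC shorthand and the relative script path are added here.

    # NVMe-oF/TCP target setup as exercised by the json_config test above.
    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB disk, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB disk, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
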
00:02:56.511 13:46:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:02:56.511 13:46:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:02:56.511 13:46:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:02:56.511 13:46:35 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:02:57.078 Calling clear_iscsi_subsystem 00:02:57.078 Calling clear_nvmf_subsystem 00:02:57.078 Calling clear_nbd_subsystem 00:02:57.078 Calling clear_ublk_subsystem 00:02:57.078 Calling clear_vhost_blk_subsystem 00:02:57.078 Calling clear_vhost_scsi_subsystem 00:02:57.078 Calling clear_bdev_subsystem 00:02:57.078 13:46:36 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:02:57.078 13:46:36 json_config -- json_config/json_config.sh@350 -- # count=100 00:02:57.078 13:46:36 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:02:57.078 13:46:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:02:57.079 13:46:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:57.079 13:46:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:02:57.337 13:46:36 json_config -- json_config/json_config.sh@352 -- # break 00:02:57.337 13:46:36 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:02:57.337 13:46:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:02:57.337 13:46:36 json_config -- json_config/common.sh@31 -- # local app=target 00:02:57.337 13:46:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:02:57.337 13:46:36 json_config -- json_config/common.sh@35 -- # [[ -n 603888 ]] 00:02:57.337 13:46:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 603888 00:02:57.337 13:46:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:02:57.337 13:46:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:02:57.337 13:46:36 json_config -- json_config/common.sh@41 -- # kill -0 603888 00:02:57.337 13:46:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:02:57.906 13:46:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:02:57.906 13:46:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:02:57.906 13:46:37 json_config -- json_config/common.sh@41 -- # kill -0 603888 00:02:57.906 13:46:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:02:57.906 13:46:37 json_config -- json_config/common.sh@43 -- # break 00:02:57.906 13:46:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:02:57.906 13:46:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:02:57.906 SPDK target shutdown done 00:02:57.906 13:46:37 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:02:57.906 INFO: relaunching applications... 
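
[Editor's sketch] The shutdown that follows "INFO: shutting down applications..." is a bounded poll, visible in the trace as kill -SIGINT followed by repeated kill -0 checks at half-second intervals. A condensed paraphrase of the json_config/common.sh logic:

    # SIGINT the target, then poll up to 30 times until the pid is gone.
    app_pid=603888            # pid taken from the run above
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
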
00:02:57.906 13:46:37 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:57.906 13:46:37 json_config -- json_config/common.sh@9 -- # local app=target 00:02:57.906 13:46:37 json_config -- json_config/common.sh@10 -- # shift 00:02:57.906 13:46:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:02:57.906 13:46:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:02:57.906 13:46:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:02:57.906 13:46:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:57.906 13:46:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:57.906 13:46:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=605024 00:02:57.906 13:46:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:02:57.906 Waiting for target to run... 00:02:57.906 13:46:37 json_config -- json_config/common.sh@25 -- # waitforlisten 605024 /var/tmp/spdk_tgt.sock 00:02:57.906 13:46:37 json_config -- common/autotest_common.sh@833 -- # '[' -z 605024 ']' 00:02:57.906 13:46:37 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:57.906 13:46:37 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:02:57.906 13:46:37 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:57.906 13:46:37 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:02:57.906 13:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:57.906 13:46:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:57.906 [2024-11-06 13:46:37.044134] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:02:57.906 [2024-11-06 13:46:37.044197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605024 ] 00:02:58.165 [2024-11-06 13:46:37.341802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:58.165 [2024-11-06 13:46:37.366060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:58.734 [2024-11-06 13:46:37.874141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:02:58.734 [2024-11-06 13:46:37.906481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:02:58.734 13:46:37 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:02:58.734 13:46:37 json_config -- common/autotest_common.sh@866 -- # return 0 00:02:58.734 13:46:37 json_config -- json_config/common.sh@26 -- # echo '' 00:02:58.734 00:02:58.734 13:46:37 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:02:58.734 13:46:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:02:58.734 INFO: Checking if target configuration is the same... 
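
[Editor's sketch] The relaunch traced above restarts spdk_tgt from the JSON saved earlier and blocks in waitforlisten until the RPC socket answers. The launch flags are verbatim from the trace; the retry loop is a minimal stand-in for waitforlisten, not the helper itself.

    # Relaunch from saved config, then wait for the RPC socket to answer.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    app_pid=$!
    # waitforlisten stand-in: retry a cheap RPC until it succeeds.
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods \
        >/dev/null 2>&1; do
        sleep 0.1
    done
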
00:02:58.734 13:46:37 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:58.734 13:46:37 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:02:58.734 13:46:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:58.734 + '[' 2 -ne 2 ']' 00:02:58.734 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:02:58.734 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:02:58.734 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.734 +++ basename /dev/fd/62 00:02:58.734 ++ mktemp /tmp/62.XXX 00:02:58.734 + tmp_file_1=/tmp/62.ps8 00:02:58.734 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:58.734 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:02:58.734 + tmp_file_2=/tmp/spdk_tgt_config.json.3IX 00:02:58.734 + ret=0 00:02:58.734 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:58.993 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:58.993 + diff -u /tmp/62.ps8 /tmp/spdk_tgt_config.json.3IX 00:02:58.993 + echo 'INFO: JSON config files are the same' 00:02:58.993 INFO: JSON config files are the same 00:02:58.993 + rm /tmp/62.ps8 /tmp/spdk_tgt_config.json.3IX 00:02:58.993 + exit 0 00:02:58.993 13:46:38 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:02:58.993 13:46:38 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:02:58.993 INFO: changing configuration and checking if this can be detected... 00:02:58.993 13:46:38 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:02:58.993 13:46:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:02:59.252 13:46:38 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:59.252 13:46:38 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:02:59.252 13:46:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:59.252 + '[' 2 -ne 2 ']' 00:02:59.252 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:02:59.252 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
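
[Editor's sketch] json_diff.sh, as traced here and in the change-detection pass that follows, normalizes both sides with config_filter.py -method sort before diffing, so key ordering cannot cause false mismatches. Roughly (temp-file names follow the mktemp templates in the trace; the plumbing is inferred from it):

    # Compare the live config against the on-disk one, ignoring ordering.
    rpc_out=$(mktemp /tmp/62.XXX)
    cfg_out=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$rpc_out"
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > "$cfg_out"
    diff -u "$rpc_out" "$cfg_out" && echo 'INFO: JSON config files are the same'
    rm "$rpc_out" "$cfg_out"
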
00:02:59.252 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:59.252 +++ basename /dev/fd/62 00:02:59.252 ++ mktemp /tmp/62.XXX 00:02:59.252 + tmp_file_1=/tmp/62.C7N 00:02:59.252 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:59.252 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:02:59.252 + tmp_file_2=/tmp/spdk_tgt_config.json.uvX 00:02:59.252 + ret=0 00:02:59.252 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:59.511 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:59.511 + diff -u /tmp/62.C7N /tmp/spdk_tgt_config.json.uvX 00:02:59.511 + ret=1 00:02:59.511 + echo '=== Start of file: /tmp/62.C7N ===' 00:02:59.511 + cat /tmp/62.C7N 00:02:59.511 + echo '=== End of file: /tmp/62.C7N ===' 00:02:59.511 + echo '' 00:02:59.511 + echo '=== Start of file: /tmp/spdk_tgt_config.json.uvX ===' 00:02:59.511 + cat /tmp/spdk_tgt_config.json.uvX 00:02:59.511 + echo '=== End of file: /tmp/spdk_tgt_config.json.uvX ===' 00:02:59.511 + echo '' 00:02:59.511 + rm /tmp/62.C7N /tmp/spdk_tgt_config.json.uvX 00:02:59.511 + exit 1 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:02:59.511 INFO: configuration change detected. 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:02:59.511 13:46:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:59.511 13:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 605024 ]] 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:02:59.511 13:46:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:02:59.511 13:46:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:59.511 13:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:59.512 13:46:38 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:02:59.512 13:46:38 json_config -- json_config/json_config.sh@200 -- # uname -s 00:02:59.512 13:46:38 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:02:59.512 13:46:38 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:02:59.512 13:46:38 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:02:59.512 13:46:38 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:02:59.512 13:46:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:02:59.512 13:46:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:59.771 13:46:38 json_config -- json_config/json_config.sh@330 -- # killprocess 605024 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@952 -- # '[' -z 605024 ']' 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@956 -- # kill -0 605024 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@957 -- # uname 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:02:59.771 13:46:38 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 605024 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 605024' 00:02:59.771 killing process with pid 605024 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@971 -- # kill 605024 00:02:59.771 13:46:38 json_config -- common/autotest_common.sh@976 -- # wait 605024 00:03:00.031 13:46:39 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:00.031 13:46:39 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:00.031 13:46:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:00.031 13:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:00.031 13:46:39 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:00.031 13:46:39 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:00.031 INFO: Success 00:03:00.031 00:03:00.031 real 0m6.722s 00:03:00.031 user 0m7.924s 00:03:00.031 sys 0m1.600s 00:03:00.031 13:46:39 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:00.031 13:46:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:00.031 ************************************ 00:03:00.031 END TEST json_config 00:03:00.031 ************************************ 00:03:00.031 13:46:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:00.031 13:46:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:00.031 13:46:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:00.031 13:46:39 -- common/autotest_common.sh@10 -- # set +x 00:03:00.031 ************************************ 00:03:00.031 START TEST json_config_extra_key 00:03:00.031 ************************************ 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.031 13:46:39 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.031 13:46:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:00.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.031 --rc genhtml_branch_coverage=1 00:03:00.031 --rc genhtml_function_coverage=1 00:03:00.031 --rc genhtml_legend=1 00:03:00.031 --rc geninfo_all_blocks=1 00:03:00.031 --rc geninfo_unexecuted_blocks=1 00:03:00.031 00:03:00.031 ' 00:03:00.031 13:46:39 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:00.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.031 --rc genhtml_branch_coverage=1 00:03:00.031 --rc genhtml_function_coverage=1 00:03:00.031 --rc genhtml_legend=1 00:03:00.032 --rc geninfo_all_blocks=1 00:03:00.032 --rc geninfo_unexecuted_blocks=1 00:03:00.032 00:03:00.032 ' 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:00.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.032 --rc genhtml_branch_coverage=1 00:03:00.032 --rc genhtml_function_coverage=1 00:03:00.032 --rc genhtml_legend=1 00:03:00.032 --rc geninfo_all_blocks=1 00:03:00.032 --rc geninfo_unexecuted_blocks=1 00:03:00.032 00:03:00.032 ' 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:00.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.032 --rc genhtml_branch_coverage=1 00:03:00.032 --rc genhtml_function_coverage=1 00:03:00.032 --rc genhtml_legend=1 00:03:00.032 --rc geninfo_all_blocks=1 00:03:00.032 --rc geninfo_unexecuted_blocks=1 00:03:00.032 00:03:00.032 ' 00:03:00.032 13:46:39 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:00.032 13:46:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:00.032 13:46:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:00.032 13:46:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.032 13:46:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.032 13:46:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.032 13:46:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.032 13:46:39 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.032 13:46:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:00.032 13:46:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:00.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:00.032 13:46:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:00.032 INFO: launching applications... 
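
[Editor's sketch] The lcov probe traced above (scripts/common.sh) decides whether the legacy --rc coverage options are needed by comparing dotted version strings field by field after splitting on '.', '-' and ':'. A condensed paraphrase of that lt/cmp_versions logic, not the helper verbatim:

    # 'lt A B' succeeds when version A is strictly older than version B.
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov 1.15 predates 2: keep the legacy --rc options'
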
00:03:00.032 13:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=605490 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:00.032 Waiting for target to run... 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 605490 /var/tmp/spdk_tgt.sock 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 605490 ']' 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:00.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:00.032 13:46:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:00.032 13:46:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:00.292 [2024-11-06 13:46:39.334822] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:00.292 [2024-11-06 13:46:39.334884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605490 ] 00:03:00.553 [2024-11-06 13:46:39.753226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:00.553 [2024-11-06 13:46:39.785464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:01.122 13:46:40 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:01.122 13:46:40 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:01.122 00:03:01.122 13:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:01.122 INFO: shutting down applications... 
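
[Editor's sketch] json_config_extra_key keeps its per-app state in the associative arrays declared above, keyed by the logical app name. The array contents below are verbatim from the trace; the launch line is reconstructed from the spdk_tgt invocation it records.

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='test/json_config/extra_key.json')

    app=target
    build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
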
00:03:01.122 13:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 605490 ]] 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 605490 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 605490 00:03:01.122 13:46:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 605490 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:01.381 13:46:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:01.381 SPDK target shutdown done 00:03:01.381 13:46:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:01.381 Success 00:03:01.381 00:03:01.381 real 0m1.455s 00:03:01.381 user 0m0.954s 00:03:01.381 sys 0m0.495s 00:03:01.381 13:46:40 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:01.381 13:46:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:01.381 ************************************ 00:03:01.381 END TEST json_config_extra_key 00:03:01.381 ************************************ 00:03:01.381 13:46:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:01.381 13:46:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:01.381 13:46:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:01.381 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:03:01.641 ************************************ 00:03:01.641 START TEST alias_rpc 00:03:01.641 ************************************ 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:01.641 * Looking for test storage... 
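
[Editor's sketch] Teardown in these suites goes through the killprocess helper, traced earlier in the json_config run as kill -0, uname, ps, kill, wait. A condensed version of that sequence (the real helper carries extra bookkeeping not shown here):

    # Kill a test pid defensively: confirm it is alive, refuse to
    # touch sudo, then signal and reap it.
    killprocess() {
        local pid=$1 process_name=
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
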
00:03:01.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:01.641 13:46:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:01.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.641 --rc genhtml_branch_coverage=1 00:03:01.641 --rc genhtml_function_coverage=1 00:03:01.641 --rc genhtml_legend=1 00:03:01.641 --rc geninfo_all_blocks=1 00:03:01.641 --rc geninfo_unexecuted_blocks=1 00:03:01.641 00:03:01.641 ' 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:01.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.641 --rc genhtml_branch_coverage=1 00:03:01.641 --rc genhtml_function_coverage=1 00:03:01.641 --rc genhtml_legend=1 00:03:01.641 --rc geninfo_all_blocks=1 00:03:01.641 --rc geninfo_unexecuted_blocks=1 00:03:01.641 00:03:01.641 ' 00:03:01.641 13:46:40 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:01.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.641 --rc genhtml_branch_coverage=1 00:03:01.641 --rc genhtml_function_coverage=1 00:03:01.641 --rc genhtml_legend=1 00:03:01.641 --rc geninfo_all_blocks=1 00:03:01.641 --rc geninfo_unexecuted_blocks=1 00:03:01.641 00:03:01.641 ' 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:01.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:01.641 --rc genhtml_branch_coverage=1 00:03:01.641 --rc genhtml_function_coverage=1 00:03:01.641 --rc genhtml_legend=1 00:03:01.641 --rc geninfo_all_blocks=1 00:03:01.641 --rc geninfo_unexecuted_blocks=1 00:03:01.641 00:03:01.641 ' 00:03:01.641 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:01.641 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=605882 00:03:01.641 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 605882 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 605882 ']' 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:01.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:01.641 13:46:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:01.641 13:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:01.641 [2024-11-06 13:46:40.844852] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:01.641 [2024-11-06 13:46:40.844912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605882 ] 00:03:01.641 [2024-11-06 13:46:40.914310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:01.901 [2024-11-06 13:46:40.952736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:02.469 13:46:41 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:02.469 13:46:41 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:02.469 13:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:02.728 13:46:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 605882 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 605882 ']' 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 605882 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 605882 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 605882' 00:03:02.728 killing process with pid 605882 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@971 -- # kill 605882 00:03:02.728 13:46:41 alias_rpc -- common/autotest_common.sh@976 -- # wait 605882 00:03:02.988 00:03:02.988 real 0m1.351s 00:03:02.988 user 0m1.501s 00:03:02.988 sys 0m0.336s 00:03:02.988 13:46:42 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:02.988 13:46:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:02.988 ************************************ 00:03:02.988 END TEST alias_rpc 00:03:02.988 ************************************ 00:03:02.988 13:46:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:02.988 13:46:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:02.988 13:46:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:02.988 13:46:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:02.988 13:46:42 -- common/autotest_common.sh@10 -- # set +x 00:03:02.988 ************************************ 00:03:02.988 START TEST spdkcli_tcp 00:03:02.988 ************************************ 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:02.988 * Looking for test storage... 
00:03:02.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:02.988 13:46:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:02.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.988 --rc genhtml_branch_coverage=1 00:03:02.988 --rc genhtml_function_coverage=1 00:03:02.988 --rc genhtml_legend=1 00:03:02.988 --rc geninfo_all_blocks=1 00:03:02.988 --rc geninfo_unexecuted_blocks=1 00:03:02.988 00:03:02.988 ' 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:02.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.988 --rc genhtml_branch_coverage=1 00:03:02.988 --rc genhtml_function_coverage=1 00:03:02.988 --rc genhtml_legend=1 00:03:02.988 --rc geninfo_all_blocks=1 00:03:02.988 --rc 
geninfo_unexecuted_blocks=1 00:03:02.988 00:03:02.988 ' 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:02.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.988 --rc genhtml_branch_coverage=1 00:03:02.988 --rc genhtml_function_coverage=1 00:03:02.988 --rc genhtml_legend=1 00:03:02.988 --rc geninfo_all_blocks=1 00:03:02.988 --rc geninfo_unexecuted_blocks=1 00:03:02.988 00:03:02.988 ' 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:02.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.988 --rc genhtml_branch_coverage=1 00:03:02.988 --rc genhtml_function_coverage=1 00:03:02.988 --rc genhtml_legend=1 00:03:02.988 --rc geninfo_all_blocks=1 00:03:02.988 --rc geninfo_unexecuted_blocks=1 00:03:02.988 00:03:02.988 ' 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=606278 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 606278 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 606278 ']' 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:02.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:02.988 13:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:02.988 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:02.988 [2024-11-06 13:46:42.244444] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
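
[Editor's note] Unlike the single-core targets earlier in the log, the spdkcli_tcp instance above starts with -m 0x3 -p 0: a two-core reactor mask with core 0 as the main core, which is why two "Reactor started" notices follow. A small illustrative snippet (not part of the test scripts) for decoding such a mask:

    # Map an SPDK -m cpumask to the cores that will host reactors.
    mask=0x3
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 0 and 1 for 0x3, matching the two notices above
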
00:03:02.988 [2024-11-06 13:46:42.244515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606278 ] 00:03:03.248 [2024-11-06 13:46:42.311930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:03.248 [2024-11-06 13:46:42.342822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:03.248 [2024-11-06 13:46:42.342822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:03.248 13:46:42 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:03.248 13:46:42 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:03:03.248 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=606330 00:03:03.248 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:03.248 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:03.507 [ 00:03:03.507 "bdev_malloc_delete", 00:03:03.507 "bdev_malloc_create", 00:03:03.507 "bdev_null_resize", 00:03:03.507 "bdev_null_delete", 00:03:03.507 "bdev_null_create", 00:03:03.507 "bdev_nvme_cuse_unregister", 00:03:03.507 "bdev_nvme_cuse_register", 00:03:03.507 "bdev_opal_new_user", 00:03:03.507 "bdev_opal_set_lock_state", 00:03:03.507 "bdev_opal_delete", 00:03:03.507 "bdev_opal_get_info", 00:03:03.507 "bdev_opal_create", 00:03:03.507 "bdev_nvme_opal_revert", 00:03:03.507 "bdev_nvme_opal_init", 00:03:03.507 "bdev_nvme_send_cmd", 00:03:03.507 "bdev_nvme_set_keys", 00:03:03.507 "bdev_nvme_get_path_iostat", 00:03:03.507 "bdev_nvme_get_mdns_discovery_info", 00:03:03.507 "bdev_nvme_stop_mdns_discovery", 00:03:03.507 "bdev_nvme_start_mdns_discovery", 00:03:03.507 "bdev_nvme_set_multipath_policy", 00:03:03.507 "bdev_nvme_set_preferred_path", 00:03:03.507 "bdev_nvme_get_io_paths", 00:03:03.507 "bdev_nvme_remove_error_injection", 00:03:03.507 "bdev_nvme_add_error_injection", 00:03:03.507 "bdev_nvme_get_discovery_info", 00:03:03.507 "bdev_nvme_stop_discovery", 00:03:03.507 "bdev_nvme_start_discovery", 00:03:03.507 "bdev_nvme_get_controller_health_info", 00:03:03.507 "bdev_nvme_disable_controller", 00:03:03.507 "bdev_nvme_enable_controller", 00:03:03.507 "bdev_nvme_reset_controller", 00:03:03.507 "bdev_nvme_get_transport_statistics", 00:03:03.507 "bdev_nvme_apply_firmware", 00:03:03.508 "bdev_nvme_detach_controller", 00:03:03.508 "bdev_nvme_get_controllers", 00:03:03.508 "bdev_nvme_attach_controller", 00:03:03.508 "bdev_nvme_set_hotplug", 00:03:03.508 "bdev_nvme_set_options", 00:03:03.508 "bdev_passthru_delete", 00:03:03.508 "bdev_passthru_create", 00:03:03.508 "bdev_lvol_set_parent_bdev", 00:03:03.508 "bdev_lvol_set_parent", 00:03:03.508 "bdev_lvol_check_shallow_copy", 00:03:03.508 "bdev_lvol_start_shallow_copy", 00:03:03.508 "bdev_lvol_grow_lvstore", 00:03:03.508 "bdev_lvol_get_lvols", 00:03:03.508 "bdev_lvol_get_lvstores", 00:03:03.508 "bdev_lvol_delete", 00:03:03.508 "bdev_lvol_set_read_only", 00:03:03.508 "bdev_lvol_resize", 00:03:03.508 "bdev_lvol_decouple_parent", 00:03:03.508 "bdev_lvol_inflate", 00:03:03.508 "bdev_lvol_rename", 00:03:03.508 "bdev_lvol_clone_bdev", 00:03:03.508 "bdev_lvol_clone", 00:03:03.508 "bdev_lvol_snapshot", 00:03:03.508 "bdev_lvol_create", 00:03:03.508 "bdev_lvol_delete_lvstore", 00:03:03.508 "bdev_lvol_rename_lvstore", 
00:03:03.508 "bdev_lvol_create_lvstore", 00:03:03.508 "bdev_raid_set_options", 00:03:03.508 "bdev_raid_remove_base_bdev", 00:03:03.508 "bdev_raid_add_base_bdev", 00:03:03.508 "bdev_raid_delete", 00:03:03.508 "bdev_raid_create", 00:03:03.508 "bdev_raid_get_bdevs", 00:03:03.508 "bdev_error_inject_error", 00:03:03.508 "bdev_error_delete", 00:03:03.508 "bdev_error_create", 00:03:03.508 "bdev_split_delete", 00:03:03.508 "bdev_split_create", 00:03:03.508 "bdev_delay_delete", 00:03:03.508 "bdev_delay_create", 00:03:03.508 "bdev_delay_update_latency", 00:03:03.508 "bdev_zone_block_delete", 00:03:03.508 "bdev_zone_block_create", 00:03:03.508 "blobfs_create", 00:03:03.508 "blobfs_detect", 00:03:03.508 "blobfs_set_cache_size", 00:03:03.508 "bdev_aio_delete", 00:03:03.508 "bdev_aio_rescan", 00:03:03.508 "bdev_aio_create", 00:03:03.508 "bdev_ftl_set_property", 00:03:03.508 "bdev_ftl_get_properties", 00:03:03.508 "bdev_ftl_get_stats", 00:03:03.508 "bdev_ftl_unmap", 00:03:03.508 "bdev_ftl_unload", 00:03:03.508 "bdev_ftl_delete", 00:03:03.508 "bdev_ftl_load", 00:03:03.508 "bdev_ftl_create", 00:03:03.508 "bdev_virtio_attach_controller", 00:03:03.508 "bdev_virtio_scsi_get_devices", 00:03:03.508 "bdev_virtio_detach_controller", 00:03:03.508 "bdev_virtio_blk_set_hotplug", 00:03:03.508 "bdev_iscsi_delete", 00:03:03.508 "bdev_iscsi_create", 00:03:03.508 "bdev_iscsi_set_options", 00:03:03.508 "accel_error_inject_error", 00:03:03.508 "ioat_scan_accel_module", 00:03:03.508 "dsa_scan_accel_module", 00:03:03.508 "iaa_scan_accel_module", 00:03:03.508 "vfu_virtio_create_fs_endpoint", 00:03:03.508 "vfu_virtio_create_scsi_endpoint", 00:03:03.508 "vfu_virtio_scsi_remove_target", 00:03:03.508 "vfu_virtio_scsi_add_target", 00:03:03.508 "vfu_virtio_create_blk_endpoint", 00:03:03.508 "vfu_virtio_delete_endpoint", 00:03:03.508 "keyring_file_remove_key", 00:03:03.508 "keyring_file_add_key", 00:03:03.508 "keyring_linux_set_options", 00:03:03.508 "fsdev_aio_delete", 00:03:03.508 "fsdev_aio_create", 00:03:03.508 "iscsi_get_histogram", 00:03:03.508 "iscsi_enable_histogram", 00:03:03.508 "iscsi_set_options", 00:03:03.508 "iscsi_get_auth_groups", 00:03:03.508 "iscsi_auth_group_remove_secret", 00:03:03.508 "iscsi_auth_group_add_secret", 00:03:03.508 "iscsi_delete_auth_group", 00:03:03.508 "iscsi_create_auth_group", 00:03:03.508 "iscsi_set_discovery_auth", 00:03:03.508 "iscsi_get_options", 00:03:03.508 "iscsi_target_node_request_logout", 00:03:03.508 "iscsi_target_node_set_redirect", 00:03:03.508 "iscsi_target_node_set_auth", 00:03:03.508 "iscsi_target_node_add_lun", 00:03:03.508 "iscsi_get_stats", 00:03:03.508 "iscsi_get_connections", 00:03:03.508 "iscsi_portal_group_set_auth", 00:03:03.508 "iscsi_start_portal_group", 00:03:03.508 "iscsi_delete_portal_group", 00:03:03.508 "iscsi_create_portal_group", 00:03:03.508 "iscsi_get_portal_groups", 00:03:03.508 "iscsi_delete_target_node", 00:03:03.508 "iscsi_target_node_remove_pg_ig_maps", 00:03:03.508 "iscsi_target_node_add_pg_ig_maps", 00:03:03.508 "iscsi_create_target_node", 00:03:03.508 "iscsi_get_target_nodes", 00:03:03.508 "iscsi_delete_initiator_group", 00:03:03.508 "iscsi_initiator_group_remove_initiators", 00:03:03.508 "iscsi_initiator_group_add_initiators", 00:03:03.508 "iscsi_create_initiator_group", 00:03:03.508 "iscsi_get_initiator_groups", 00:03:03.508 "nvmf_set_crdt", 00:03:03.508 "nvmf_set_config", 00:03:03.508 "nvmf_set_max_subsystems", 00:03:03.508 "nvmf_stop_mdns_prr", 00:03:03.508 "nvmf_publish_mdns_prr", 00:03:03.508 "nvmf_subsystem_get_listeners", 00:03:03.508 
"nvmf_subsystem_get_qpairs", 00:03:03.508 "nvmf_subsystem_get_controllers", 00:03:03.508 "nvmf_get_stats", 00:03:03.508 "nvmf_get_transports", 00:03:03.508 "nvmf_create_transport", 00:03:03.508 "nvmf_get_targets", 00:03:03.508 "nvmf_delete_target", 00:03:03.508 "nvmf_create_target", 00:03:03.508 "nvmf_subsystem_allow_any_host", 00:03:03.508 "nvmf_subsystem_set_keys", 00:03:03.508 "nvmf_subsystem_remove_host", 00:03:03.508 "nvmf_subsystem_add_host", 00:03:03.508 "nvmf_ns_remove_host", 00:03:03.508 "nvmf_ns_add_host", 00:03:03.508 "nvmf_subsystem_remove_ns", 00:03:03.508 "nvmf_subsystem_set_ns_ana_group", 00:03:03.508 "nvmf_subsystem_add_ns", 00:03:03.508 "nvmf_subsystem_listener_set_ana_state", 00:03:03.508 "nvmf_discovery_get_referrals", 00:03:03.508 "nvmf_discovery_remove_referral", 00:03:03.508 "nvmf_discovery_add_referral", 00:03:03.508 "nvmf_subsystem_remove_listener", 00:03:03.508 "nvmf_subsystem_add_listener", 00:03:03.508 "nvmf_delete_subsystem", 00:03:03.508 "nvmf_create_subsystem", 00:03:03.508 "nvmf_get_subsystems", 00:03:03.508 "env_dpdk_get_mem_stats", 00:03:03.508 "nbd_get_disks", 00:03:03.508 "nbd_stop_disk", 00:03:03.508 "nbd_start_disk", 00:03:03.508 "ublk_recover_disk", 00:03:03.508 "ublk_get_disks", 00:03:03.508 "ublk_stop_disk", 00:03:03.508 "ublk_start_disk", 00:03:03.508 "ublk_destroy_target", 00:03:03.508 "ublk_create_target", 00:03:03.508 "virtio_blk_create_transport", 00:03:03.508 "virtio_blk_get_transports", 00:03:03.508 "vhost_controller_set_coalescing", 00:03:03.508 "vhost_get_controllers", 00:03:03.508 "vhost_delete_controller", 00:03:03.508 "vhost_create_blk_controller", 00:03:03.508 "vhost_scsi_controller_remove_target", 00:03:03.508 "vhost_scsi_controller_add_target", 00:03:03.508 "vhost_start_scsi_controller", 00:03:03.508 "vhost_create_scsi_controller", 00:03:03.508 "thread_set_cpumask", 00:03:03.508 "scheduler_set_options", 00:03:03.508 "framework_get_governor", 00:03:03.508 "framework_get_scheduler", 00:03:03.508 "framework_set_scheduler", 00:03:03.508 "framework_get_reactors", 00:03:03.508 "thread_get_io_channels", 00:03:03.508 "thread_get_pollers", 00:03:03.508 "thread_get_stats", 00:03:03.508 "framework_monitor_context_switch", 00:03:03.508 "spdk_kill_instance", 00:03:03.508 "log_enable_timestamps", 00:03:03.508 "log_get_flags", 00:03:03.508 "log_clear_flag", 00:03:03.508 "log_set_flag", 00:03:03.508 "log_get_level", 00:03:03.508 "log_set_level", 00:03:03.508 "log_get_print_level", 00:03:03.508 "log_set_print_level", 00:03:03.508 "framework_enable_cpumask_locks", 00:03:03.508 "framework_disable_cpumask_locks", 00:03:03.508 "framework_wait_init", 00:03:03.508 "framework_start_init", 00:03:03.508 "scsi_get_devices", 00:03:03.508 "bdev_get_histogram", 00:03:03.508 "bdev_enable_histogram", 00:03:03.508 "bdev_set_qos_limit", 00:03:03.508 "bdev_set_qd_sampling_period", 00:03:03.508 "bdev_get_bdevs", 00:03:03.508 "bdev_reset_iostat", 00:03:03.508 "bdev_get_iostat", 00:03:03.508 "bdev_examine", 00:03:03.508 "bdev_wait_for_examine", 00:03:03.509 "bdev_set_options", 00:03:03.509 "accel_get_stats", 00:03:03.509 "accel_set_options", 00:03:03.509 "accel_set_driver", 00:03:03.509 "accel_crypto_key_destroy", 00:03:03.509 "accel_crypto_keys_get", 00:03:03.509 "accel_crypto_key_create", 00:03:03.509 "accel_assign_opc", 00:03:03.509 "accel_get_module_info", 00:03:03.509 "accel_get_opc_assignments", 00:03:03.509 "vmd_rescan", 00:03:03.509 "vmd_remove_device", 00:03:03.509 "vmd_enable", 00:03:03.509 "sock_get_default_impl", 00:03:03.509 "sock_set_default_impl", 
00:03:03.509 "sock_impl_set_options", 00:03:03.509 "sock_impl_get_options", 00:03:03.509 "iobuf_get_stats", 00:03:03.509 "iobuf_set_options", 00:03:03.509 "keyring_get_keys", 00:03:03.509 "vfu_tgt_set_base_path", 00:03:03.509 "framework_get_pci_devices", 00:03:03.509 "framework_get_config", 00:03:03.509 "framework_get_subsystems", 00:03:03.509 "fsdev_set_opts", 00:03:03.509 "fsdev_get_opts", 00:03:03.509 "trace_get_info", 00:03:03.509 "trace_get_tpoint_group_mask", 00:03:03.509 "trace_disable_tpoint_group", 00:03:03.509 "trace_enable_tpoint_group", 00:03:03.509 "trace_clear_tpoint_mask", 00:03:03.509 "trace_set_tpoint_mask", 00:03:03.509 "notify_get_notifications", 00:03:03.509 "notify_get_types", 00:03:03.509 "spdk_get_version", 00:03:03.509 "rpc_get_methods" 00:03:03.509 ] 00:03:03.509 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:03.509 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:03.509 13:46:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 606278 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 606278 ']' 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 606278 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 606278 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 606278' 00:03:03.509 killing process with pid 606278 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 606278 00:03:03.509 13:46:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 606278 00:03:03.769 00:03:03.769 real 0m0.864s 00:03:03.769 user 0m1.463s 00:03:03.769 sys 0m0.328s 00:03:03.769 13:46:42 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:03.769 13:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:03.769 ************************************ 00:03:03.769 END TEST spdkcli_tcp 00:03:03.769 ************************************ 00:03:03.769 13:46:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:03.769 13:46:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:03.769 13:46:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:03.769 13:46:42 -- common/autotest_common.sh@10 -- # set +x 00:03:03.769 ************************************ 00:03:03.769 START TEST dpdk_mem_utility 00:03:03.769 ************************************ 00:03:03.769 13:46:42 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:03.769 * Looking for test storage... 
00:03:03.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:03.769 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:03.769 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:03:03.769 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:04.029 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:04.029 13:46:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:04.029 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:04.029 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:04.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.029 --rc genhtml_branch_coverage=1 00:03:04.029 --rc genhtml_function_coverage=1 00:03:04.029 --rc genhtml_legend=1 00:03:04.029 --rc geninfo_all_blocks=1 00:03:04.029 --rc geninfo_unexecuted_blocks=1 00:03:04.029 00:03:04.029 ' 00:03:04.029 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:04.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.029 --rc 
genhtml_branch_coverage=1 00:03:04.029 --rc genhtml_function_coverage=1 00:03:04.029 --rc genhtml_legend=1 00:03:04.029 --rc geninfo_all_blocks=1 00:03:04.029 --rc geninfo_unexecuted_blocks=1 00:03:04.029 00:03:04.029 ' 00:03:04.029 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:04.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.029 --rc genhtml_branch_coverage=1 00:03:04.029 --rc genhtml_function_coverage=1 00:03:04.029 --rc genhtml_legend=1 00:03:04.029 --rc geninfo_all_blocks=1 00:03:04.029 --rc geninfo_unexecuted_blocks=1 00:03:04.029 00:03:04.029 ' 00:03:04.029 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:04.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.029 --rc genhtml_branch_coverage=1 00:03:04.029 --rc genhtml_function_coverage=1 00:03:04.029 --rc genhtml_legend=1 00:03:04.029 --rc geninfo_all_blocks=1 00:03:04.029 --rc geninfo_unexecuted_blocks=1 00:03:04.029 00:03:04.029 ' 00:03:04.030 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:04.030 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=606683 00:03:04.030 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 606683 00:03:04.030 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 606683 ']' 00:03:04.030 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:04.030 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:04.030 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:04.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:04.030 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:04.030 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:04.030 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:04.030 [2024-11-06 13:46:43.129719] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:04.030 [2024-11-06 13:46:43.129770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606683 ] 00:03:04.030 [2024-11-06 13:46:43.195031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:04.030 [2024-11-06 13:46:43.226059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:04.289 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:04.289 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:03:04.289 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:04.289 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:04.289 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:04.289 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:04.289 { 00:03:04.289 "filename": "/tmp/spdk_mem_dump.txt" 00:03:04.289 } 00:03:04.289 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:04.289 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:04.289 DPDK memory size 818.000000 MiB in 1 heap(s) 00:03:04.290 1 heaps totaling size 818.000000 MiB 00:03:04.290 size: 818.000000 MiB heap id: 0 00:03:04.290 end heaps---------- 00:03:04.290 9 mempools totaling size 603.782043 MiB 00:03:04.290 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:04.290 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:04.290 size: 100.555481 MiB name: bdev_io_606683 00:03:04.290 size: 50.003479 MiB name: msgpool_606683 00:03:04.290 size: 36.509338 MiB name: fsdev_io_606683 00:03:04.290 size: 21.763794 MiB name: PDU_Pool 00:03:04.290 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:04.290 size: 4.133484 MiB name: evtpool_606683 00:03:04.290 size: 0.026123 MiB name: Session_Pool 00:03:04.290 end mempools------- 00:03:04.290 6 memzones totaling size 4.142822 MiB 00:03:04.290 size: 1.000366 MiB name: RG_ring_0_606683 00:03:04.290 size: 1.000366 MiB name: RG_ring_1_606683 00:03:04.290 size: 1.000366 MiB name: RG_ring_4_606683 00:03:04.290 size: 1.000366 MiB name: RG_ring_5_606683 00:03:04.290 size: 0.125366 MiB name: RG_ring_2_606683 00:03:04.290 size: 0.015991 MiB name: RG_ring_3_606683 00:03:04.290 end memzones------- 00:03:04.290 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:04.290 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:04.290 list of free elements. 
size: 10.852478 MiB 00:03:04.290 element at address: 0x200019200000 with size: 0.999878 MiB 00:03:04.290 element at address: 0x200019400000 with size: 0.999878 MiB 00:03:04.290 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:04.290 element at address: 0x200032000000 with size: 0.994446 MiB 00:03:04.290 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:04.290 element at address: 0x200012c00000 with size: 0.944275 MiB 00:03:04.290 element at address: 0x200019600000 with size: 0.936584 MiB 00:03:04.290 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:04.290 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:03:04.290 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:04.290 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:04.290 element at address: 0x200019800000 with size: 0.485657 MiB 00:03:04.290 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:04.290 element at address: 0x200028200000 with size: 0.410034 MiB 00:03:04.290 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:04.290 list of standard malloc elements. size: 199.218628 MiB 00:03:04.290 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:04.290 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:04.290 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:04.290 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:03:04.290 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:03:04.290 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:04.290 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:03:04.290 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:04.290 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:03:04.290 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:03:04.290 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:03:04.290 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200028268f80 with size: 0.000183 MiB 00:03:04.290 element at address: 0x200028269040 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:03:04.290 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:03:04.290 list of memzone associated elements. size: 607.928894 MiB 00:03:04.290 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:03:04.290 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:04.290 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:03:04.290 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:04.290 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:03:04.290 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_606683_0 00:03:04.290 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:04.290 associated memzone info: size: 48.002930 MiB name: MP_msgpool_606683_0 00:03:04.290 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:04.290 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_606683_0 00:03:04.290 element at address: 0x2000199be940 with size: 20.255554 MiB 00:03:04.290 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:04.290 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:03:04.290 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:04.290 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:04.290 associated memzone info: size: 3.000122 MiB name: MP_evtpool_606683_0 00:03:04.290 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:04.290 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_606683 00:03:04.290 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:04.290 associated memzone info: size: 1.007996 MiB name: MP_evtpool_606683 00:03:04.290 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:04.290 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:04.290 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:03:04.290 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:04.290 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:04.290 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:04.290 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:04.290 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:04.290 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:04.290 associated memzone info: size: 1.000366 MiB name: RG_ring_0_606683 00:03:04.290 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:04.290 associated memzone info: size: 1.000366 MiB name: RG_ring_1_606683 00:03:04.290 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:03:04.290 associated memzone info: size: 1.000366 MiB name: RG_ring_4_606683 00:03:04.290 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:03:04.290 associated memzone info: size: 1.000366 MiB name: RG_ring_5_606683 00:03:04.290 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:04.290 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_606683 00:03:04.290 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:04.290 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_606683 00:03:04.290 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:04.290 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:04.290 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:04.290 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:04.290 element at address: 0x20001987c540 with size: 0.250488 MiB 00:03:04.290 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:04.290 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:04.290 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_606683 00:03:04.290 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:04.290 associated memzone info: size: 0.125366 MiB name: RG_ring_2_606683 00:03:04.290 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:03:04.290 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:04.290 element at address: 0x200028269100 with size: 0.023743 MiB 00:03:04.290 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:04.290 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:04.290 associated memzone info: size: 0.015991 MiB name: RG_ring_3_606683 00:03:04.290 element at address: 0x20002826f240 with size: 0.002441 MiB 00:03:04.290 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:04.290 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:04.290 associated memzone info: size: 0.000183 MiB name: MP_msgpool_606683 00:03:04.290 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:04.290 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_606683 00:03:04.290 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:04.290 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_606683 00:03:04.290 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:03:04.290 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:04.290 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:04.291 13:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 606683 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 606683 ']' 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 606683 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 606683 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 606683' 00:03:04.291 killing process with pid 606683 00:03:04.291 13:46:43 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 606683 00:03:04.291 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 606683 00:03:04.551 00:03:04.551 real 0m0.725s 00:03:04.551 user 0m0.675s 00:03:04.551 sys 0m0.318s 00:03:04.551 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:04.551 13:46:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:04.551 ************************************ 00:03:04.551 END TEST dpdk_mem_utility 00:03:04.551 ************************************ 00:03:04.551 13:46:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:04.551 13:46:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:04.551 13:46:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:04.551 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.551 ************************************ 00:03:04.551 START TEST event 00:03:04.551 ************************************ 00:03:04.551 13:46:43 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:04.551 * Looking for test storage... 00:03:04.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:04.551 13:46:43 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:04.551 13:46:43 event -- common/autotest_common.sh@1691 -- # lcov --version 00:03:04.551 13:46:43 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:04.810 13:46:43 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:04.810 13:46:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:04.810 13:46:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:04.810 13:46:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:04.811 13:46:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:04.811 13:46:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:04.811 13:46:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:04.811 13:46:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:04.811 13:46:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:04.811 13:46:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:04.811 13:46:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:04.811 13:46:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:04.811 13:46:43 event -- scripts/common.sh@344 -- # case "$op" in 00:03:04.811 13:46:43 event -- scripts/common.sh@345 -- # : 1 00:03:04.811 13:46:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:04.811 13:46:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:04.811 13:46:43 event -- scripts/common.sh@365 -- # decimal 1 00:03:04.811 13:46:43 event -- scripts/common.sh@353 -- # local d=1 00:03:04.811 13:46:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:04.811 13:46:43 event -- scripts/common.sh@355 -- # echo 1 00:03:04.811 13:46:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:04.811 13:46:43 event -- scripts/common.sh@366 -- # decimal 2 00:03:04.811 13:46:43 event -- scripts/common.sh@353 -- # local d=2 00:03:04.811 13:46:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:04.811 13:46:43 event -- scripts/common.sh@355 -- # echo 2 00:03:04.811 13:46:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:04.811 13:46:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:04.811 13:46:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:04.811 13:46:43 event -- scripts/common.sh@368 -- # return 0 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:04.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.811 --rc genhtml_branch_coverage=1 00:03:04.811 --rc genhtml_function_coverage=1 00:03:04.811 --rc genhtml_legend=1 00:03:04.811 --rc geninfo_all_blocks=1 00:03:04.811 --rc geninfo_unexecuted_blocks=1 00:03:04.811 00:03:04.811 ' 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:04.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.811 --rc genhtml_branch_coverage=1 00:03:04.811 --rc genhtml_function_coverage=1 00:03:04.811 --rc genhtml_legend=1 00:03:04.811 --rc geninfo_all_blocks=1 00:03:04.811 --rc geninfo_unexecuted_blocks=1 00:03:04.811 00:03:04.811 ' 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:04.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.811 --rc genhtml_branch_coverage=1 00:03:04.811 --rc genhtml_function_coverage=1 00:03:04.811 --rc genhtml_legend=1 00:03:04.811 --rc geninfo_all_blocks=1 00:03:04.811 --rc geninfo_unexecuted_blocks=1 00:03:04.811 00:03:04.811 ' 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:04.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:04.811 --rc genhtml_branch_coverage=1 00:03:04.811 --rc genhtml_function_coverage=1 00:03:04.811 --rc genhtml_legend=1 00:03:04.811 --rc geninfo_all_blocks=1 00:03:04.811 --rc geninfo_unexecuted_blocks=1 00:03:04.811 00:03:04.811 ' 00:03:04.811 13:46:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:04.811 13:46:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:04.811 13:46:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:03:04.811 13:46:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:04.811 13:46:43 event -- common/autotest_common.sh@10 -- # set +x 00:03:04.811 ************************************ 00:03:04.811 START TEST event_perf 00:03:04.811 ************************************ 00:03:04.811 13:46:43 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:03:04.811 Running I/O for 1 seconds...[2024-11-06 13:46:43.909011] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:04.811 [2024-11-06 13:46:43.909061] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606760 ] 00:03:04.811 [2024-11-06 13:46:43.978743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:04.811 [2024-11-06 13:46:44.018568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:04.811 [2024-11-06 13:46:44.018723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:04.811 [2024-11-06 13:46:44.018875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:04.811 Running I/O for 1 seconds...[2024-11-06 13:46:44.018876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:06.191 00:03:06.191 lcore 0: 191205 00:03:06.191 lcore 1: 191207 00:03:06.191 lcore 2: 191205 00:03:06.191 lcore 3: 191202 00:03:06.191 done. 00:03:06.191 00:03:06.191 real 0m1.146s 00:03:06.191 user 0m4.080s 00:03:06.191 sys 0m0.064s 00:03:06.191 13:46:45 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:06.191 13:46:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:06.191 ************************************ 00:03:06.191 END TEST event_perf 00:03:06.191 ************************************ 00:03:06.191 13:46:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:06.191 13:46:45 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:03:06.191 13:46:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:06.191 13:46:45 event -- common/autotest_common.sh@10 -- # set +x 00:03:06.191 ************************************ 00:03:06.191 START TEST event_reactor 00:03:06.191 ************************************ 00:03:06.191 13:46:45 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:06.191 [2024-11-06 13:46:45.102041] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:06.191 [2024-11-06 13:46:45.102085] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607115 ] 00:03:06.191 [2024-11-06 13:46:45.167029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:06.191 [2024-11-06 13:46:45.196603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:07.130 test_start 00:03:07.130 oneshot 00:03:07.130 tick 100 00:03:07.130 tick 100 00:03:07.130 tick 250 00:03:07.130 tick 100 00:03:07.130 tick 100 00:03:07.130 tick 250 00:03:07.130 tick 100 00:03:07.130 tick 500 00:03:07.130 tick 100 00:03:07.130 tick 100 00:03:07.130 tick 250 00:03:07.130 tick 100 00:03:07.130 tick 100 00:03:07.130 test_end 00:03:07.130 00:03:07.130 real 0m1.129s 00:03:07.130 user 0m1.069s 00:03:07.130 sys 0m0.056s 00:03:07.130 13:46:46 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:07.130 13:46:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:07.130 ************************************ 00:03:07.130 END TEST event_reactor 00:03:07.130 ************************************ 00:03:07.130 13:46:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:07.130 13:46:46 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:03:07.130 13:46:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:07.130 13:46:46 event -- common/autotest_common.sh@10 -- # set +x 00:03:07.130 ************************************ 00:03:07.130 START TEST event_reactor_perf 00:03:07.130 ************************************ 00:03:07.130 13:46:46 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:07.130 [2024-11-06 13:46:46.274855] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:07.130 [2024-11-06 13:46:46.274899] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607463 ] 00:03:07.130 [2024-11-06 13:46:46.339541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:07.130 [2024-11-06 13:46:46.368823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:08.510 test_start 00:03:08.510 test_end 00:03:08.510 Performance: 534548 events per second 00:03:08.510 00:03:08.510 real 0m1.128s 00:03:08.510 user 0m1.068s 00:03:08.510 sys 0m0.056s 00:03:08.510 13:46:47 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:08.510 13:46:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:08.510 ************************************ 00:03:08.510 END TEST event_reactor_perf 00:03:08.510 ************************************ 00:03:08.510 13:46:47 event -- event/event.sh@49 -- # uname -s 00:03:08.510 13:46:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:08.510 13:46:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:08.510 13:46:47 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:08.510 13:46:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:08.510 13:46:47 event -- common/autotest_common.sh@10 -- # set +x 00:03:08.510 ************************************ 00:03:08.510 START TEST event_scheduler 00:03:08.510 ************************************ 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:08.510 * Looking for test storage... 
00:03:08.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:08.510 13:46:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:08.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.510 --rc genhtml_branch_coverage=1 00:03:08.510 --rc genhtml_function_coverage=1 00:03:08.510 --rc genhtml_legend=1 00:03:08.510 --rc geninfo_all_blocks=1 00:03:08.510 --rc geninfo_unexecuted_blocks=1 00:03:08.510 00:03:08.510 ' 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:08.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.510 --rc genhtml_branch_coverage=1 00:03:08.510 --rc genhtml_function_coverage=1 00:03:08.510 --rc genhtml_legend=1 00:03:08.510 --rc geninfo_all_blocks=1 00:03:08.510 --rc geninfo_unexecuted_blocks=1 00:03:08.510 00:03:08.510 ' 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:08.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.510 --rc genhtml_branch_coverage=1 00:03:08.510 --rc genhtml_function_coverage=1 00:03:08.510 --rc genhtml_legend=1 00:03:08.510 --rc geninfo_all_blocks=1 00:03:08.510 --rc geninfo_unexecuted_blocks=1 00:03:08.510 00:03:08.510 ' 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:08.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.510 --rc genhtml_branch_coverage=1 00:03:08.510 --rc genhtml_function_coverage=1 00:03:08.510 --rc genhtml_legend=1 00:03:08.510 --rc geninfo_all_blocks=1 00:03:08.510 --rc geninfo_unexecuted_blocks=1 00:03:08.510 00:03:08.510 ' 00:03:08.510 13:46:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:08.510 13:46:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=607846 00:03:08.510 13:46:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:08.510 13:46:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 607846 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 607846 ']' 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:08.510 13:46:47 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:08.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:08.511 13:46:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:08.511 13:46:47 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:08.511 13:46:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:08.511 [2024-11-06 13:46:47.596909] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:08.511 [2024-11-06 13:46:47.596973] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607846 ] 00:03:08.511 [2024-11-06 13:46:47.681647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:08.511 [2024-11-06 13:46:47.737059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:08.511 [2024-11-06 13:46:47.737225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:08.511 [2024-11-06 13:46:47.737363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:08.511 [2024-11-06 13:46:47.737509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:03:09.450 13:46:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:09.450 [2024-11-06 13:46:48.392040] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:09.450 [2024-11-06 13:46:48.392057] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:09.450 [2024-11-06 13:46:48.392067] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:09.450 [2024-11-06 13:46:48.392073] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:09.450 [2024-11-06 13:46:48.392079] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.450 13:46:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:09.450 [2024-11-06 13:46:48.455555] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
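The trace above captures the required ordering when configuring the app framework over RPC: scheduler.sh launches its test app with --wait-for-rpc -f, selects the scheduler while initialization is paused, and only then lets the framework come up. A minimal sketch of the same sequence against a running target, assuming the default /var/tmp/spdk.sock socket and scripts/rpc.py from this SPDK checkout (both assumptions based on this job's layout):

  # Pick the dynamic scheduler while the app is still paused by --wait-for-rpc
  scripts/rpc.py framework_set_scheduler dynamic
  # Let subsystem initialization proceed, then block until it completes
  scripts/rpc.py framework_start_init
  scripts/rpc.py framework_wait_init
  # Confirm the active scheduler and its load/core/busy options
  scripts/rpc.py framework_get_scheduler

All four methods appear in the rpc_get_methods listing earlier in this log, so nothing here relies on test-only extensions.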
00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.450 13:46:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:09.450 13:46:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:09.450 ************************************ 00:03:09.450 START TEST scheduler_create_thread 00:03:09.450 ************************************ 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.450 2 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.450 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 3 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 4 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 5 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 6 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 7 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 8 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 9 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 10 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:09.451 13:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:10.020 13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:10.021 13:46:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:10.021 13:46:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:10.021 13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:10.021 13:46:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:11.397 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:11.397 00:03:11.397 real 0m1.756s 00:03:11.397 user 0m0.015s 00:03:11.397 sys 0m0.004s 00:03:11.397 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:11.397 13:46:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:11.397 ************************************ 00:03:11.397 END TEST scheduler_create_thread 00:03:11.397 ************************************ 00:03:11.397 13:46:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:03:11.398 13:46:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 607846 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 607846 ']' 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 607846 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 607846 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 607846' 00:03:11.398 killing process with pid 607846 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 607846 00:03:11.398 13:46:50 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 607846 00:03:11.656 [2024-11-06 13:46:50.722412] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
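The scheduler_thread_create, scheduler_thread_set_active, and scheduler_thread_delete calls traced above are not core SPDK RPCs; they are served by the test app itself and reached through an rpc.py plugin loaded with --plugin scheduler_plugin (scheduler.sh arranges the PYTHONPATH; the exact plugin location alongside the test is an assumption here). A rough sketch of the sequence the test runs:

  # Create a thread named active_pinned, pinned to core 0, at 100% activity
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # The create call returns the new thread id (captured as thread_id=11 above);
  # lower that thread to 50% activity
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # Exercise the deletion path, as the thread_id=12 capture above does
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12

The -n/-m/-a flags (name, cpumask, active percentage) and the thread ids 11 and 12 mirror the values visible in the trace; against a fresh target the returned ids may differ.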
00:03:11.656 00:03:11.656 real 0m3.371s 00:03:11.656 user 0m6.000s 00:03:11.656 sys 0m0.333s 00:03:11.656 13:46:50 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:11.656 13:46:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:11.656 ************************************ 00:03:11.656 END TEST event_scheduler 00:03:11.656 ************************************ 00:03:11.656 13:46:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:03:11.656 13:46:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:03:11.656 13:46:50 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:11.656 13:46:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:11.656 13:46:50 event -- common/autotest_common.sh@10 -- # set +x 00:03:11.656 ************************************ 00:03:11.656 START TEST app_repeat 00:03:11.656 ************************************ 00:03:11.656 13:46:50 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:03:11.656 13:46:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:11.656 13:46:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:11.656 13:46:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:03:11.656 13:46:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:11.656 13:46:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:03:11.656 13:46:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=608566 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 608566' 00:03:11.657 Process app_repeat pid: 608566 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:03:11.657 spdk_app_start Round 0 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 608566 /var/tmp/spdk-nbd.sock 00:03:11.657 13:46:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 608566 ']' 00:03:11.657 13:46:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:11.657 13:46:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:11.657 13:46:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:11.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:03:11.657 13:46:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:11.657 13:46:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:11.657 13:46:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:03:11.657 [2024-11-06 13:46:50.882273] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
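Reconstructed from the event.sh@NN markers traced above, each app_repeat round is: wait for the app's RPC socket, create two malloc bdevs, verify them over NBD, then ask the app to restart itself. A hedged sketch; waitforlisten and nbd_rpc_data_verify are the helpers this log traces (from autotest_common.sh and nbd_common.sh) and are not redefined here, and $SPDK_DIR standing for the repo root is an assumption:

    sock=/var/tmp/spdk-nbd.sock
    rpc="$SPDK_DIR/scripts/rpc.py -s $sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$sock"     # block until the app listens
        $rpc bdev_malloc_create 64 4096         # -> Malloc0
        $rpc bdev_malloc_create 64 4096         # -> Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM         # app stops this iteration and
        sleep 3                                 # starts the next round itself
    done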
00:03:11.657 [2024-11-06 13:46:50.882322] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608566 ] 00:03:11.916 [2024-11-06 13:46:50.947926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:11.916 [2024-11-06 13:46:50.980261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:11.916 [2024-11-06 13:46:50.980275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:11.916 13:46:51 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:11.916 13:46:51 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:03:11.916 13:46:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:11.916 Malloc0 00:03:12.174 13:46:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:12.174 Malloc1 00:03:12.174 13:46:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:12.174 13:46:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:12.433 /dev/nbd0 00:03:12.433 13:46:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:12.433 13:46:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:12.433 1+0 records in 00:03:12.433 1+0 records out 00:03:12.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171026 s, 23.9 MB/s 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:03:12.433 13:46:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:03:12.433 13:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:12.433 13:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:12.433 13:46:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:12.433 /dev/nbd1 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:12.693 1+0 records in 00:03:12.693 1+0 records out 00:03:12.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000119472 s, 34.3 MB/s 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:03:12.693 13:46:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:12.693 
13:46:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:12.693 { 00:03:12.693 "nbd_device": "/dev/nbd0", 00:03:12.693 "bdev_name": "Malloc0" 00:03:12.693 }, 00:03:12.693 { 00:03:12.693 "nbd_device": "/dev/nbd1", 00:03:12.693 "bdev_name": "Malloc1" 00:03:12.693 } 00:03:12.693 ]' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:12.693 { 00:03:12.693 "nbd_device": "/dev/nbd0", 00:03:12.693 "bdev_name": "Malloc0" 00:03:12.693 }, 00:03:12.693 { 00:03:12.693 "nbd_device": "/dev/nbd1", 00:03:12.693 "bdev_name": "Malloc1" 00:03:12.693 } 00:03:12.693 ]' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:12.693 /dev/nbd1' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:12.693 /dev/nbd1' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:12.693 256+0 records in 00:03:12.693 256+0 records out 00:03:12.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431719 s, 243 MB/s 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:12.693 256+0 records in 00:03:12.693 256+0 records out 00:03:12.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114155 s, 91.9 MB/s 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:12.693 256+0 records in 00:03:12.693 256+0 records out 00:03:12.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127191 s, 82.4 MB/s 00:03:12.693 13:46:51 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:12.693 13:46:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:12.694 13:46:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:12.952 13:46:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:12.953 13:46:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:13.211 13:46:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:13.470 13:46:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:13.470 13:46:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:13.470 13:46:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:13.471 13:46:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:13.728 [2024-11-06 13:46:52.756903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:13.728 [2024-11-06 13:46:52.786295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:13.728 [2024-11-06 13:46:52.786295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:13.729 [2024-11-06 13:46:52.815625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:13.729 [2024-11-06 13:46:52.815654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:17.015 13:46:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:17.016 13:46:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:03:17.016 spdk_app_start Round 1 00:03:17.016 13:46:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 608566 /var/tmp/spdk-nbd.sock 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 608566 ']' 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:17.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
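The write/verify step inside each round is plain dd and cmp against the NBD devices, exactly as the nbd_common.sh@76..@85 lines above trace it. The same sequence as a standalone sketch (the temp-file path is shortened here; the log writes test/event/nbdrandtest under the workspace):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB pattern
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it out
        cmp -b -n 1M "$tmp" "$dev"                              # byte-compare readback
    done
    rm "$tmp"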
00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:17.016 13:46:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:03:17.016 13:46:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:17.016 Malloc0 00:03:17.016 13:46:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:17.016 Malloc1 00:03:17.016 13:46:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:17.016 13:46:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:17.274 /dev/nbd0 00:03:17.274 13:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:17.274 13:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:17.274 1+0 records in 00:03:17.274 1+0 records out 00:03:17.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184022 s, 22.3 MB/s 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:03:17.274 13:46:56 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:17.275 /dev/nbd1 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:17.275 1+0 records in 00:03:17.275 1+0 records out 00:03:17.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169278 s, 24.2 MB/s 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:03:17.275 13:46:56 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:17.275 13:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:17.533 13:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:03:17.533 { 00:03:17.533 "nbd_device": "/dev/nbd0", 00:03:17.533 "bdev_name": "Malloc0" 00:03:17.533 }, 00:03:17.533 { 00:03:17.533 "nbd_device": "/dev/nbd1", 00:03:17.533 "bdev_name": "Malloc1" 00:03:17.533 } 00:03:17.533 ]' 00:03:17.533 13:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:17.533 { 00:03:17.533 "nbd_device": "/dev/nbd0", 00:03:17.533 "bdev_name": "Malloc0" 00:03:17.533 }, 00:03:17.533 { 00:03:17.534 "nbd_device": "/dev/nbd1", 00:03:17.534 "bdev_name": "Malloc1" 00:03:17.534 } 00:03:17.534 ]' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:17.534 /dev/nbd1' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:17.534 /dev/nbd1' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:17.534 256+0 records in 00:03:17.534 256+0 records out 00:03:17.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484692 s, 216 MB/s 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:17.534 256+0 records in 00:03:17.534 256+0 records out 00:03:17.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114954 s, 91.2 MB/s 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:17.534 256+0 records in 00:03:17.534 256+0 records out 00:03:17.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132922 s, 78.9 MB/s 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:17.534 13:46:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:17.792 13:46:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:17.792 13:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:18.050 13:46:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:18.050 13:46:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:18.309 13:46:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:18.309 [2024-11-06 13:46:57.531774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:18.309 [2024-11-06 13:46:57.561434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:18.309 [2024-11-06 13:46:57.561438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:18.309 [2024-11-06 13:46:57.591380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:18.309 [2024-11-06 13:46:57.591410] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:21.687 13:47:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:21.687 13:47:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:03:21.687 spdk_app_start Round 2 00:03:21.687 13:47:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 608566 /var/tmp/spdk-nbd.sock 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 608566 ']' 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:21.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
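The nbd_get_count checks traced above parse the nbd_get_disks RPC with jq and count device entries with grep -c, expecting 2 while Malloc0/Malloc1 are exported and 0 after teardown (the bare "true" in the trace is the zero-match case, since grep -c exits nonzero then). As a sketch, reusing the $rpc shorthand from the round-loop sketch earlier:

    nbd_count() {
        local names
        names=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true   # still prints 0 on no matches
    }
    [ "$(nbd_count)" -eq 2 ]   # holds while the two disks are attached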
00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:21.687 13:47:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:03:21.687 13:47:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:21.687 Malloc0 00:03:21.687 13:47:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:21.687 Malloc1 00:03:21.687 13:47:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:21.687 13:47:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:21.947 /dev/nbd0 00:03:21.947 13:47:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:21.947 13:47:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:21.947 1+0 records in 00:03:21.947 1+0 records out 00:03:21.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194188 s, 21.1 MB/s 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:03:21.947 13:47:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:03:21.947 13:47:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:21.947 13:47:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:21.947 13:47:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:22.206 /dev/nbd1 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:22.206 1+0 records in 00:03:22.206 1+0 records out 00:03:22.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189748 s, 21.6 MB/s 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:03:22.206 13:47:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:03:22.206 { 00:03:22.206 "nbd_device": "/dev/nbd0", 00:03:22.206 "bdev_name": "Malloc0" 00:03:22.206 }, 00:03:22.206 { 00:03:22.206 "nbd_device": "/dev/nbd1", 00:03:22.206 "bdev_name": "Malloc1" 00:03:22.206 } 00:03:22.206 ]' 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:22.206 13:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:22.206 { 00:03:22.206 "nbd_device": "/dev/nbd0", 00:03:22.206 "bdev_name": "Malloc0" 00:03:22.206 }, 00:03:22.206 { 00:03:22.206 "nbd_device": "/dev/nbd1", 00:03:22.206 "bdev_name": "Malloc1" 00:03:22.206 } 00:03:22.206 ]' 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:22.464 /dev/nbd1' 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:22.464 /dev/nbd1' 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:22.464 13:47:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:22.465 256+0 records in 00:03:22.465 256+0 records out 00:03:22.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431658 s, 243 MB/s 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:22.465 256+0 records in 00:03:22.465 256+0 records out 00:03:22.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115348 s, 90.9 MB/s 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:22.465 256+0 records in 00:03:22.465 256+0 records out 00:03:22.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122399 s, 85.7 MB/s 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:22.465 13:47:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:03:22.724 13:47:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:22.983 13:47:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:22.983 13:47:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:22.983 13:47:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:23.242 [2024-11-06 13:47:02.324115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:23.242 [2024-11-06 13:47:02.352729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:23.242 [2024-11-06 13:47:02.352732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.242 [2024-11-06 13:47:02.382179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:23.242 [2024-11-06 13:47:02.382210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:26.531 13:47:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 608566 /var/tmp/spdk-nbd.sock 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 608566 ']' 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:26.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
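Both waitfornbd and waitfornbd_exit in this trace are bounded polls of /proc/partitions (up to 20 tries); waitfornbd additionally read-checks one block through the device before declaring it usable. Condensed from the autotest_common.sh@870..@891 and nbd_common.sh@35..@45 lines; the retry sleep is an assumption, since this run always succeeded on the first try:

    waitfornbd() {                   # wait until /dev/$1 exists and answers I/O
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                # interval assumed, not visible in the log
        done
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]             # one block read back means the device is live
    }
    waitfornbd_exit() {              # wait until /dev/$1 is gone again
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }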
00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:03:26.531 13:47:05 event.app_repeat -- event/event.sh@39 -- # killprocess 608566 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 608566 ']' 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 608566 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 608566 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 608566' 00:03:26.531 killing process with pid 608566 00:03:26.531 13:47:05 event.app_repeat -- common/autotest_common.sh@971 -- # kill 608566 00:03:26.532 13:47:05 event.app_repeat -- common/autotest_common.sh@976 -- # wait 608566 00:03:26.532 spdk_app_start is called in Round 0. 00:03:26.532 Shutdown signal received, stop current app iteration 00:03:26.532 Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 reinitialization... 00:03:26.532 spdk_app_start is called in Round 1. 00:03:26.532 Shutdown signal received, stop current app iteration 00:03:26.532 Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 reinitialization... 00:03:26.532 spdk_app_start is called in Round 2. 00:03:26.532 Shutdown signal received, stop current app iteration 00:03:26.532 Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 reinitialization... 00:03:26.532 spdk_app_start is called in Round 3. 
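killprocess, traced at autotest_common.sh@952..@976 above, guards the kill with a liveness probe and a sanity check on the process name before signalling and reaping. A condensed sketch; the sudo branch is inferred from the '[' reactor_0 = sudo ']' comparison in the trace, and this run always took the plain-kill path:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1            # no pid given
        kill -0 "$pid" || return 0           # probe: already gone, nothing to do
        [ "$(uname)" = Linux ] &&
            process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            sudo kill -9 "$pid"              # assumed branch: sudo swallows SIGTERM
        else
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                      # reap so teardown cannot race the exit
        fi
    }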
00:03:26.532 Shutdown signal received, stop current app iteration 00:03:26.532 13:47:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:03:26.532 13:47:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:03:26.532 00:03:26.532 real 0m14.663s 00:03:26.532 user 0m32.069s 00:03:26.532 sys 0m1.839s 00:03:26.532 13:47:05 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:26.532 13:47:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:26.532 ************************************ 00:03:26.532 END TEST app_repeat 00:03:26.532 ************************************ 00:03:26.532 13:47:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:03:26.532 13:47:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:03:26.532 13:47:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:26.532 13:47:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:26.532 13:47:05 event -- common/autotest_common.sh@10 -- # set +x 00:03:26.532 ************************************ 00:03:26.532 START TEST cpu_locks 00:03:26.532 ************************************ 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:03:26.532 * Looking for test storage... 00:03:26.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.532 13:47:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:26.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.532 --rc genhtml_branch_coverage=1 00:03:26.532 --rc genhtml_function_coverage=1 00:03:26.532 --rc genhtml_legend=1 00:03:26.532 --rc geninfo_all_blocks=1 00:03:26.532 --rc geninfo_unexecuted_blocks=1 00:03:26.532 00:03:26.532 ' 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:26.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.532 --rc genhtml_branch_coverage=1 00:03:26.532 --rc genhtml_function_coverage=1 00:03:26.532 --rc genhtml_legend=1 00:03:26.532 --rc geninfo_all_blocks=1 00:03:26.532 --rc geninfo_unexecuted_blocks=1 00:03:26.532 00:03:26.532 ' 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:26.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.532 --rc genhtml_branch_coverage=1 00:03:26.532 --rc genhtml_function_coverage=1 00:03:26.532 --rc genhtml_legend=1 00:03:26.532 --rc geninfo_all_blocks=1 00:03:26.532 --rc geninfo_unexecuted_blocks=1 00:03:26.532 00:03:26.532 ' 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:26.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.532 --rc genhtml_branch_coverage=1 00:03:26.532 --rc genhtml_function_coverage=1 00:03:26.532 --rc genhtml_legend=1 00:03:26.532 --rc geninfo_all_blocks=1 00:03:26.532 --rc geninfo_unexecuted_blocks=1 00:03:26.532 00:03:26.532 ' 00:03:26.532 13:47:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:03:26.532 13:47:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:03:26.532 13:47:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:03:26.532 13:47:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:26.532 13:47:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:26.532 ************************************ 
00:03:26.532 START TEST default_locks 00:03:26.532 ************************************ 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=612146 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 612146 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 612146 ']' 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:03:26.532 13:47:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:26.532 [2024-11-06 13:47:05.757309] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:26.532 [2024-11-06 13:47:05.757355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612146 ] 00:03:26.792 [2024-11-06 13:47:05.822525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:26.792 [2024-11-06 13:47:05.852557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.792 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:26.792 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:03:26.792 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 612146 00:03:26.792 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 612146 00:03:26.792 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:27.050 lslocks: write error 00:03:27.050 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 612146 00:03:27.050 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 612146 ']' 00:03:27.050 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 612146 00:03:27.050 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:03:27.050 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:27.050 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 612146 00:03:27.051 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:27.051 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:27.051 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612146' 
00:03:27.051 killing process with pid 612146 00:03:27.051 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 612146 00:03:27.051 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 612146 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 612146 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 612146 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 612146 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 612146 ']' 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:03:27.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (612146) - No such process 00:03:27.310 ERROR: process (pid: 612146) is no longer running 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:03:27.310 00:03:27.310 real 0m0.702s 00:03:27.310 user 0m0.664s 00:03:27.310 sys 0m0.361s 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:27.310 13:47:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:03:27.310 ************************************ 00:03:27.310 END TEST default_locks 00:03:27.310 ************************************ 00:03:27.310 13:47:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:03:27.310 13:47:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:27.310 13:47:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:27.310 13:47:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:27.310 ************************************ 00:03:27.310 START TEST default_locks_via_rpc 00:03:27.310 ************************************ 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=612192 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 612192 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 612192 ']' 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
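default_locks, which concludes above, boils down to two assertions: a target started with -m 0x1 leaves a flock-style claim on /var/tmp/spdk_cpu_lock_000 that lslocks can see, and killing the daemon releases it. A minimal re-creation of the locks_exist helper, with the pid lookup left illustrative:

    # Sketch: check whether a running spdk_tgt holds its per-core lock.
    # SPDK flocks /var/tmp/spdk_cpu_lock_<core>; lslocks reports it.
    pid=$(pgrep -f spdk_tgt | head -n1)              # illustrative pid lookup
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a CPU core lock"
    else
        echo "pid $pid holds no CPU core lock"
    fi

The stray 'lslocks: write error' in the output above is lslocks reacting to grep -q closing the pipe after the first match; it is expected noise, not a failure.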
00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:27.310 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.310 [2024-11-06 13:47:06.500662] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:27.310 [2024-11-06 13:47:06.500710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612192 ] 00:03:27.310 [2024-11-06 13:47:06.565541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.569 [2024-11-06 13:47:06.596860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:03:27.569 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:03:27.570 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.570 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.570 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.570 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 612192 00:03:27.570 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 612192 00:03:27.570 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 612192 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 612192 ']' 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 612192 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 612192 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612192' 00:03:27.829 killing process with pid 612192 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 612192 00:03:27.829 13:47:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 612192 00:03:28.087 00:03:28.087 real 0m0.707s 00:03:28.087 user 0m0.687s 00:03:28.087 sys 0m0.337s 00:03:28.087 13:47:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:28.087 13:47:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.087 ************************************ 00:03:28.087 END TEST default_locks_via_rpc 00:03:28.087 ************************************ 00:03:28.087 13:47:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:03:28.087 13:47:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:28.087 13:47:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:28.087 13:47:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:28.087 ************************************ 00:03:28.087 START TEST non_locking_app_on_locked_coremask 00:03:28.087 ************************************ 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=612534 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 612534 /var/tmp/spdk.sock 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 612534 ']' 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:28.087 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:28.087 [2024-11-06 13:47:07.250982] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:28.087 [2024-11-06 13:47:07.251029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612534 ] 00:03:28.088 [2024-11-06 13:47:07.315645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.088 [2024-11-06 13:47:07.346177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=612551 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 612551 /var/tmp/spdk2.sock 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 612551 ']' 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:28.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:28.345 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:28.346 13:47:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:03:28.346 [2024-11-06 13:47:07.548591] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:28.346 [2024-11-06 13:47:07.548644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612551 ] 00:03:28.605 [2024-11-06 13:47:07.643589] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
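default_locks_via_rpc, finished above, exercises the same claims but toggles them at runtime instead of at startup. A sketch of the RPC round trip it performs, with the socket path illustrative; framework_disable_cpumask_locks and framework_enable_cpumask_locks are the RPCs shown in the trace:

    # Sketch: drop and re-take CPU core lock claims over JSON-RPC.
    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks  # releases /var/tmp/spdk_cpu_lock_*
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks   # re-acquires the claims
    lslocks | grep -c spdk_cpu_lock                            # nonzero again after enable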
00:03:28.605 [2024-11-06 13:47:07.643609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.605 [2024-11-06 13:47:07.701918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.174 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:29.174 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:03:29.174 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 612534 00:03:29.174 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 612534 00:03:29.174 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:29.433 lslocks: write error 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 612534 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 612534 ']' 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 612534 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 612534 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612534' 00:03:29.433 killing process with pid 612534 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 612534 00:03:29.433 13:47:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 612534 00:03:29.998 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 612551 00:03:29.998 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 612551 ']' 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 612551 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 612551 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612551' 00:03:29.999 killing 
process with pid 612551 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 612551 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 612551 00:03:29.999 00:03:29.999 real 0m2.040s 00:03:29.999 user 0m2.180s 00:03:29.999 sys 0m0.698s 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:29.999 13:47:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:29.999 ************************************ 00:03:29.999 END TEST non_locking_app_on_locked_coremask 00:03:29.999 ************************************ 00:03:29.999 13:47:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:03:29.999 13:47:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:29.999 13:47:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:29.999 13:47:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:30.257 ************************************ 00:03:30.257 START TEST locking_app_on_unlocked_coremask 00:03:30.257 ************************************ 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=612922 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 612922 /var/tmp/spdk.sock 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 612922 ']' 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:30.257 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:30.257 [2024-11-06 13:47:09.347362] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:30.257 [2024-11-06 13:47:09.347413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612922 ] 00:03:30.257 [2024-11-06 13:47:09.412611] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
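non_locking_app_on_locked_coremask, which ends above, shows the escape hatch: a second target may share an already-claimed core as long as it opts out of claiming with --disable-cpumask-locks. A condensed sketch of that flow, where the binary path and masks mirror the trace and the settle time is illustrative:

    # Sketch: two targets on core 0; only the first claims the lock.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &                           # claims core 0
    sleep 1                                                                     # illustrative settle time
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no claim
    # Both daemons coexist; /var/tmp/spdk_cpu_lock_000 belongs to the first.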
00:03:30.257 [2024-11-06 13:47:09.412634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.257 [2024-11-06 13:47:09.441646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=612928 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 612928 /var/tmp/spdk2.sock 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 612928 ']' 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:30.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:30.515 13:47:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:03:30.515 [2024-11-06 13:47:09.648335] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:30.515 [2024-11-06 13:47:09.648383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612928 ] 00:03:30.515 [2024-11-06 13:47:09.748323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.774 [2024-11-06 13:47:09.807024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.340 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:31.340 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:03:31.340 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 612928 00:03:31.340 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 612928 00:03:31.340 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:31.598 lslocks: write error 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 612922 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 612922 ']' 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 612922 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 612922 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612922' 00:03:31.598 killing process with pid 612922 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 612922 00:03:31.598 13:47:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 612922 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 612928 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 612928 ']' 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 612928 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 612928 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:31.857 13:47:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 612928' 00:03:31.857 killing process with pid 612928 00:03:31.857 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 612928 00:03:31.858 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 612928 00:03:32.117 00:03:32.117 real 0m2.018s 00:03:32.117 user 0m2.173s 00:03:32.117 sys 0m0.682s 00:03:32.117 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:32.117 13:47:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:32.117 ************************************ 00:03:32.117 END TEST locking_app_on_unlocked_coremask 00:03:32.117 ************************************ 00:03:32.117 13:47:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:03:32.117 13:47:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:32.117 13:47:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:32.117 13:47:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:32.117 ************************************ 00:03:32.117 START TEST locking_app_on_locked_coremask 00:03:32.117 ************************************ 00:03:32.117 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:03:32.117 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=613400 00:03:32.117 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 613400 /var/tmp/spdk.sock 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 613400 ']' 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:32.118 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:32.377 [2024-11-06 13:47:11.415205] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:32.377 [2024-11-06 13:47:11.415263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613400 ] 00:03:32.377 [2024-11-06 13:47:11.481268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.377 [2024-11-06 13:47:11.512178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=613601 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 613601 /var/tmp/spdk2.sock 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 613601 /var/tmp/spdk2.sock 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 613601 /var/tmp/spdk2.sock 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 613601 ']' 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:32.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:32.636 13:47:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:32.636 [2024-11-06 13:47:11.719002] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:32.636 [2024-11-06 13:47:11.719054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613601 ] 00:03:32.636 [2024-11-06 13:47:11.814799] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 613400 has claimed it. 00:03:32.636 [2024-11-06 13:47:11.814831] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:03:33.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (613601) - No such process 00:03:33.205 ERROR: process (pid: 613601) is no longer running 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 613400 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 613400 00:03:33.205 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:33.464 lslocks: write error 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 613400 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 613400 ']' 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 613400 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 613400 00:03:33.464 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:33.465 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:33.465 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 613400' 00:03:33.465 killing process with pid 613400 00:03:33.465 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 613400 00:03:33.465 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 613400 00:03:33.465 00:03:33.465 real 0m1.348s 00:03:33.465 user 0m1.481s 00:03:33.465 sys 0m0.416s 00:03:33.465 13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:33.465 
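locking_app_on_locked_coremask, wrapped up above, is the negative case: with locking left on, a second target asked for an already-claimed core must refuse to start, printing the claim_cpu_cores ERROR seen in the trace. A sketch of that expectation:

    # Sketch: a second lock-claiming target on a claimed core must abort.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &     # primary claims core 0
    sleep 1                                               # illustrative settle time
    if build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started on a claimed core"; exit 1
    else
        echo "second instance refused core 0, as expected"
    fi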
13:47:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:33.465 ************************************ 00:03:33.465 END TEST locking_app_on_locked_coremask 00:03:33.465 ************************************ 00:03:33.465 13:47:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:03:33.723 13:47:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:33.723 13:47:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:33.723 13:47:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:33.723 ************************************ 00:03:33.723 START TEST locking_overlapped_coremask 00:03:33.723 ************************************ 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=613686 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 613686 /var/tmp/spdk.sock 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 613686 ']' 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:33.723 13:47:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:03:33.723 [2024-11-06 13:47:12.811070] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:33.723 [2024-11-06 13:47:12.811122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613686 ] 00:03:33.723 [2024-11-06 13:47:12.882402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:33.723 [2024-11-06 13:47:12.915806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:33.723 [2024-11-06 13:47:12.915957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.723 [2024-11-06 13:47:12.915958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=614002 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 614002 /var/tmp/spdk2.sock 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 614002 /var/tmp/spdk2.sock 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 614002 /var/tmp/spdk2.sock 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 614002 ']' 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:34.661 13:47:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:34.661 [2024-11-06 13:47:13.625300] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:34.661 [2024-11-06 13:47:13.625352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614002 ] 00:03:34.661 [2024-11-06 13:47:13.748379] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 613686 has claimed it. 00:03:34.661 [2024-11-06 13:47:13.748422] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:03:35.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (614002) - No such process 00:03:35.230 ERROR: process (pid: 614002) is no longer running 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 613686 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 613686 ']' 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 613686 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 613686 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 613686' 00:03:35.230 killing process with pid 613686 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 613686 00:03:35.230 13:47:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 613686 00:03:35.230 00:03:35.230 real 0m1.705s 00:03:35.230 user 0m4.955s 00:03:35.230 sys 0m0.337s 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.230 13:47:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:35.230 ************************************ 00:03:35.230 END TEST locking_overlapped_coremask 00:03:35.230 ************************************ 00:03:35.230 13:47:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:03:35.230 13:47:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.230 13:47:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.230 13:47:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:35.490 ************************************ 00:03:35.490 START TEST locking_overlapped_coremask_via_rpc 00:03:35.490 ************************************ 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=614196 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 614196 /var/tmp/spdk.sock 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 614196 ']' 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.490 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:03:35.490 [2024-11-06 13:47:14.564411] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:35.490 [2024-11-06 13:47:14.564465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614196 ] 00:03:35.490 [2024-11-06 13:47:14.630661] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
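locking_overlapped_coremask, completed above, finishes by asserting that exactly the expected lock files remain: with -m 0x7 the surviving target owns cores 0-2, so spdk_cpu_lock_000 through _002 must be the only entries. The check_remaining_locks comparison from the trace, reproduced as a standalone sketch:

    # Sketch: assert the on-disk lock files match the claimed core mask.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 for mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || {
        echo "unexpected CPU lock files: ${locks[*]}"; exit 1
    }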
00:03:35.490 [2024-11-06 13:47:14.630685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:35.490 [2024-11-06 13:47:14.663185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:35.490 [2024-11-06 13:47:14.663313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:35.490 [2024-11-06 13:47:14.663495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=614367 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 614367 /var/tmp/spdk2.sock 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 614367 ']' 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:35.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.750 13:47:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:03:35.750 [2024-11-06 13:47:14.872838] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:35.750 [2024-11-06 13:47:14.872887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614367 ] 00:03:35.750 [2024-11-06 13:47:14.970929] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:03:35.750 [2024-11-06 13:47:14.970953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:35.750 [2024-11-06 13:47:15.030171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:35.750 [2024-11-06 13:47:15.033399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:35.750 [2024-11-06 13:47:15.033401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.688 [2024-11-06 13:47:15.662299] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 614196 has claimed it. 
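The overlap that the claim error above reports is by construction: the first target runs with -m 0x7 (binary 00111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the masks intersect on exactly one core, 0x7 & 0x1c = 0x4, i.e. bit 2. The reactor NOTICE lines confirm it: cores 1, 2, 0 for the first target and 3, 2, 4 for the second, with core 2 shared.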
00:03:36.688 request: 00:03:36.688 { 00:03:36.688 "method": "framework_enable_cpumask_locks", 00:03:36.688 "req_id": 1 00:03:36.688 } 00:03:36.688 Got JSON-RPC error response 00:03:36.688 response: 00:03:36.688 { 00:03:36.688 "code": -32603, 00:03:36.688 "message": "Failed to claim CPU core: 2" 00:03:36.688 } 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 614196 /var/tmp/spdk.sock 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 614196 ']' 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 614367 /var/tmp/spdk2.sock 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 614367 ']' 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:36.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
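Condensed, the RPC sequence the test just drove looks like this (rpc.py path abbreviated from the full workspace path shown in the log):

    # primary target (default /var/tmp/spdk.sock): claims cores 0-2, succeeds
    scripts/rpc.py framework_enable_cpumask_locks
    # second target: tries to claim cores 2-4, fails with -32603 'Failed to claim CPU core: 2'
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The second call's failure is the expected outcome, which is why the test wraps it in the NOT helper above.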
00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:36.688 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:03:36.948 00:03:36.948 real 0m1.474s 00:03:36.948 user 0m0.655s 00:03:36.948 sys 0m0.104s 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:36.948 13:47:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.948 ************************************ 00:03:36.948 END TEST locking_overlapped_coremask_via_rpc 00:03:36.948 ************************************ 00:03:36.948 13:47:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:03:36.948 13:47:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 614196 ]] 00:03:36.948 13:47:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 614196 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 614196 ']' 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 614196 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 614196 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 614196' 00:03:36.948 killing process with pid 614196 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 614196 00:03:36.948 13:47:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 614196 00:03:37.207 13:47:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 614367 ]] 00:03:37.207 13:47:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 614367 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 614367 ']' 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 614367 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 614367 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 614367' 00:03:37.207 killing process with pid 614367 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 614367 00:03:37.207 13:47:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 614367 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 614196 ]] 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 614196 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 614196 ']' 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 614196 00:03:37.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (614196) - No such process 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 614196 is not found' 00:03:37.468 Process with pid 614196 is not found 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 614367 ]] 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 614367 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 614367 ']' 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 614367 00:03:37.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (614367) - No such process 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 614367 is not found' 00:03:37.468 Process with pid 614367 is not found 00:03:37.468 13:47:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:03:37.468 00:03:37.468 real 0m10.931s 00:03:37.468 user 0m20.683s 00:03:37.468 sys 0m3.663s 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.468 13:47:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:37.468 ************************************ 00:03:37.468 END TEST cpu_locks 00:03:37.468 ************************************ 00:03:37.468 00:03:37.468 real 0m32.771s 00:03:37.468 user 1m5.133s 00:03:37.468 sys 0m6.275s 00:03:37.468 13:47:16 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.468 13:47:16 event -- common/autotest_common.sh@10 -- # set +x 00:03:37.468 ************************************ 00:03:37.468 END TEST event 00:03:37.468 ************************************ 00:03:37.468 13:47:16 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:03:37.468 13:47:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.468 13:47:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.468 13:47:16 -- common/autotest_common.sh@10 -- # set +x 00:03:37.468 ************************************ 00:03:37.468 START TEST thread 00:03:37.468 ************************************ 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:03:37.468 * Looking for test storage... 00:03:37.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:37.468 13:47:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.468 13:47:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.468 13:47:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.468 13:47:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.468 13:47:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.468 13:47:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.468 13:47:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.468 13:47:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.468 13:47:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.468 13:47:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.468 13:47:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.468 13:47:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:03:37.468 13:47:16 thread -- scripts/common.sh@345 -- # : 1 00:03:37.468 13:47:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.468 13:47:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.468 13:47:16 thread -- scripts/common.sh@365 -- # decimal 1 00:03:37.468 13:47:16 thread -- scripts/common.sh@353 -- # local d=1 00:03:37.468 13:47:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.468 13:47:16 thread -- scripts/common.sh@355 -- # echo 1 00:03:37.468 13:47:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.468 13:47:16 thread -- scripts/common.sh@366 -- # decimal 2 00:03:37.468 13:47:16 thread -- scripts/common.sh@353 -- # local d=2 00:03:37.468 13:47:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.468 13:47:16 thread -- scripts/common.sh@355 -- # echo 2 00:03:37.468 13:47:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.468 13:47:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.468 13:47:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.468 13:47:16 thread -- scripts/common.sh@368 -- # return 0 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.468 --rc genhtml_branch_coverage=1 00:03:37.468 --rc genhtml_function_coverage=1 00:03:37.468 --rc genhtml_legend=1 00:03:37.468 --rc geninfo_all_blocks=1 00:03:37.468 --rc geninfo_unexecuted_blocks=1 00:03:37.468 00:03:37.468 ' 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.468 --rc genhtml_branch_coverage=1 00:03:37.468 --rc genhtml_function_coverage=1 00:03:37.468 --rc genhtml_legend=1 00:03:37.468 --rc geninfo_all_blocks=1 00:03:37.468 --rc geninfo_unexecuted_blocks=1 00:03:37.468 00:03:37.468 ' 00:03:37.468 13:47:16 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.468 --rc genhtml_branch_coverage=1 00:03:37.468 --rc genhtml_function_coverage=1 00:03:37.468 --rc genhtml_legend=1 00:03:37.468 --rc geninfo_all_blocks=1 00:03:37.468 --rc geninfo_unexecuted_blocks=1 00:03:37.468 00:03:37.468 ' 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.468 --rc genhtml_branch_coverage=1 00:03:37.468 --rc genhtml_function_coverage=1 00:03:37.468 --rc genhtml_legend=1 00:03:37.468 --rc geninfo_all_blocks=1 00:03:37.468 --rc geninfo_unexecuted_blocks=1 00:03:37.468 00:03:37.468 ' 00:03:37.468 13:47:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.468 13:47:16 thread -- common/autotest_common.sh@10 -- # set +x 00:03:37.468 ************************************ 00:03:37.468 START TEST thread_poller_perf 00:03:37.468 ************************************ 00:03:37.468 13:47:16 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:37.468 [2024-11-06 13:47:16.720616] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:37.469 [2024-11-06 13:47:16.720664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614810 ] 00:03:37.728 [2024-11-06 13:47:16.788082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.728 [2024-11-06 13:47:16.822179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.728 Running 1000 pollers for 1 seconds with 1 microseconds period. 
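The measurement block that follows reports busy TSC cycles, total poller invocations, and the derived poller_cost. The derivation is just the quotient of the two counters: poller_cost = busy / total_run_count = 2407916692 / 419000 ≈ 5746 cycles, and at tsc_hz = 2400000000 (2.4 cycles per nanosecond) that is 5746 / 2.4 ≈ 2394 ns per poller invocation.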
00:03:38.676 [2024-11-06T12:47:17.960Z] ====================================== 00:03:38.676 [2024-11-06T12:47:17.960Z] busy:2407916692 (cyc) 00:03:38.676 [2024-11-06T12:47:17.960Z] total_run_count: 419000 00:03:38.676 [2024-11-06T12:47:17.960Z] tsc_hz: 2400000000 (cyc) 00:03:38.676 [2024-11-06T12:47:17.960Z] ====================================== 00:03:38.676 [2024-11-06T12:47:17.960Z] poller_cost: 5746 (cyc), 2394 (nsec) 00:03:38.676 00:03:38.676 real 0m1.143s 00:03:38.676 user 0m1.079s 00:03:38.676 sys 0m0.060s 00:03:38.676 13:47:17 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:38.676 13:47:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:03:38.676 ************************************ 00:03:38.676 END TEST thread_poller_perf 00:03:38.676 ************************************ 00:03:38.676 13:47:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:38.676 13:47:17 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:03:38.676 13:47:17 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:38.676 13:47:17 thread -- common/autotest_common.sh@10 -- # set +x 00:03:38.676 ************************************ 00:03:38.676 START TEST thread_poller_perf 00:03:38.676 ************************************ 00:03:38.676 13:47:17 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:38.676 [2024-11-06 13:47:17.904357] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:38.676 [2024-11-06 13:47:17.904392] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615158 ] 00:03:38.676 [2024-11-06 13:47:17.958923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.980 [2024-11-06 13:47:17.987678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.980 Running 1000 pollers for 1 seconds with 0 microseconds period. 
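For the zero-period (busy-poll) run below, the same arithmetic gives 2401613006 / 5559000 ≈ 432 cycles, i.e. 432 / 2.4 = 180 ns per invocation, roughly an order of magnitude cheaper than the 1 µs timed run above, presumably because timed pollers carry extra per-expiry bookkeeping that busy pollers avoid.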
00:03:39.918 [2024-11-06T12:47:19.202Z] ====================================== 00:03:39.918 [2024-11-06T12:47:19.202Z] busy:2401613006 (cyc) 00:03:39.919 [2024-11-06T12:47:19.203Z] total_run_count: 5559000 00:03:39.919 [2024-11-06T12:47:19.203Z] tsc_hz: 2400000000 (cyc) 00:03:39.919 [2024-11-06T12:47:19.203Z] ====================================== 00:03:39.919 [2024-11-06T12:47:19.203Z] poller_cost: 432 (cyc), 180 (nsec) 00:03:39.919 00:03:39.919 real 0m1.116s 00:03:39.919 user 0m1.064s 00:03:39.919 sys 0m0.050s 00:03:39.919 13:47:19 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.919 13:47:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:03:39.919 ************************************ 00:03:39.919 END TEST thread_poller_perf 00:03:39.919 ************************************ 00:03:39.919 13:47:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:03:39.919 00:03:39.919 real 0m2.467s 00:03:39.919 user 0m2.244s 00:03:39.919 sys 0m0.229s 00:03:39.919 13:47:19 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.919 13:47:19 thread -- common/autotest_common.sh@10 -- # set +x 00:03:39.919 ************************************ 00:03:39.919 END TEST thread 00:03:39.919 ************************************ 00:03:39.919 13:47:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:03:39.919 13:47:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:03:39.919 13:47:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:39.919 13:47:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:39.919 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:03:39.919 ************************************ 00:03:39.919 START TEST app_cmdline 00:03:39.919 ************************************ 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:03:39.919 * Looking for test storage... 
00:03:39.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.919 13:47:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.919 --rc genhtml_branch_coverage=1 00:03:39.919 --rc genhtml_function_coverage=1 00:03:39.919 --rc genhtml_legend=1 00:03:39.919 --rc geninfo_all_blocks=1 00:03:39.919 --rc geninfo_unexecuted_blocks=1 00:03:39.919 00:03:39.919 ' 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.919 --rc genhtml_branch_coverage=1 00:03:39.919 --rc genhtml_function_coverage=1 00:03:39.919 --rc genhtml_legend=1 00:03:39.919 --rc geninfo_all_blocks=1 00:03:39.919 --rc geninfo_unexecuted_blocks=1 
00:03:39.919 00:03:39.919 ' 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.919 --rc genhtml_branch_coverage=1 00:03:39.919 --rc genhtml_function_coverage=1 00:03:39.919 --rc genhtml_legend=1 00:03:39.919 --rc geninfo_all_blocks=1 00:03:39.919 --rc geninfo_unexecuted_blocks=1 00:03:39.919 00:03:39.919 ' 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.919 --rc genhtml_branch_coverage=1 00:03:39.919 --rc genhtml_function_coverage=1 00:03:39.919 --rc genhtml_legend=1 00:03:39.919 --rc geninfo_all_blocks=1 00:03:39.919 --rc geninfo_unexecuted_blocks=1 00:03:39.919 00:03:39.919 ' 00:03:39.919 13:47:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:03:39.919 13:47:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=615445 00:03:39.919 13:47:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 615445 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 615445 ']' 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:39.919 13:47:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:03:39.919 13:47:19 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:03:40.179 [2024-11-06 13:47:19.242119] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
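Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the target launched above: only those two JSON-RPC methods will be served. Condensed, the check that follows amounts to (illustrative; the log shows the full rpc.py path):

    scripts/rpc.py spdk_get_version        # allowed: returns the version object parsed below
    scripts/rpc.py rpc_get_methods         # allowed: used to enumerate the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats  # any other method fails with -32601 'Method not found'

which is exactly the positive/negative split the cmdline test exercises.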
00:03:40.179 [2024-11-06 13:47:19.242184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615445 ] 00:03:40.179 [2024-11-06 13:47:19.308974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.179 [2024-11-06 13:47:19.340951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:03:40.438 { 00:03:40.438 "version": "SPDK v25.01-pre git sha1 b7ef84b3d", 00:03:40.438 "fields": { 00:03:40.438 "major": 25, 00:03:40.438 "minor": 1, 00:03:40.438 "patch": 0, 00:03:40.438 "suffix": "-pre", 00:03:40.438 "commit": "b7ef84b3d" 00:03:40.438 } 00:03:40.438 } 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:03:40.438 13:47:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:40.438 13:47:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:40.439 13:47:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:40.439 13:47:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:40.439 13:47:19 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:03:40.439 13:47:19 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:03:40.697 request: 00:03:40.697 { 00:03:40.697 "method": "env_dpdk_get_mem_stats", 00:03:40.697 "req_id": 1 00:03:40.697 } 00:03:40.697 Got JSON-RPC error response 00:03:40.697 response: 00:03:40.697 { 00:03:40.697 "code": -32601, 00:03:40.697 "message": "Method not found" 00:03:40.697 } 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:40.697 13:47:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 615445 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 615445 ']' 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 615445 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 615445 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 615445' 00:03:40.697 killing process with pid 615445 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@971 -- # kill 615445 00:03:40.697 13:47:19 app_cmdline -- common/autotest_common.sh@976 -- # wait 615445 00:03:40.957 00:03:40.957 real 0m1.012s 00:03:40.957 user 0m1.199s 00:03:40.957 sys 0m0.331s 00:03:40.957 13:47:20 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:40.957 13:47:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:03:40.957 ************************************ 00:03:40.957 END TEST app_cmdline 00:03:40.957 ************************************ 00:03:40.957 13:47:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:03:40.957 13:47:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.957 13:47:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.957 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:03:40.957 ************************************ 00:03:40.957 START TEST version 00:03:40.957 ************************************ 00:03:40.957 13:47:20 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:03:40.957 * Looking for test storage... 
00:03:40.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:03:40.957 13:47:20 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:40.957 13:47:20 version -- common/autotest_common.sh@1691 -- # lcov --version 00:03:40.957 13:47:20 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.218 13:47:20 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.218 13:47:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.218 13:47:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.218 13:47:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.218 13:47:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.218 13:47:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.218 13:47:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.218 13:47:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.218 13:47:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.218 13:47:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.218 13:47:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.218 13:47:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.218 13:47:20 version -- scripts/common.sh@344 -- # case "$op" in 00:03:41.218 13:47:20 version -- scripts/common.sh@345 -- # : 1 00:03:41.218 13:47:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.218 13:47:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:41.218 13:47:20 version -- scripts/common.sh@365 -- # decimal 1 00:03:41.218 13:47:20 version -- scripts/common.sh@353 -- # local d=1 00:03:41.218 13:47:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.218 13:47:20 version -- scripts/common.sh@355 -- # echo 1 00:03:41.218 13:47:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.218 13:47:20 version -- scripts/common.sh@366 -- # decimal 2 00:03:41.218 13:47:20 version -- scripts/common.sh@353 -- # local d=2 00:03:41.218 13:47:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.218 13:47:20 version -- scripts/common.sh@355 -- # echo 2 00:03:41.218 13:47:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.218 13:47:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.218 13:47:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.218 13:47:20 version -- scripts/common.sh@368 -- # return 0 00:03:41.218 13:47:20 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.218 13:47:20 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.218 --rc genhtml_branch_coverage=1 00:03:41.218 --rc genhtml_function_coverage=1 00:03:41.218 --rc genhtml_legend=1 00:03:41.218 --rc geninfo_all_blocks=1 00:03:41.218 --rc geninfo_unexecuted_blocks=1 00:03:41.218 00:03:41.218 ' 00:03:41.218 13:47:20 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.218 --rc genhtml_branch_coverage=1 00:03:41.218 --rc genhtml_function_coverage=1 00:03:41.218 --rc genhtml_legend=1 00:03:41.218 --rc geninfo_all_blocks=1 00:03:41.218 --rc geninfo_unexecuted_blocks=1 00:03:41.218 00:03:41.218 ' 00:03:41.218 13:47:20 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:41.218 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.218 --rc genhtml_branch_coverage=1 00:03:41.218 --rc genhtml_function_coverage=1 00:03:41.218 --rc genhtml_legend=1 00:03:41.218 --rc geninfo_all_blocks=1 00:03:41.218 --rc geninfo_unexecuted_blocks=1 00:03:41.218 00:03:41.218 ' 00:03:41.218 13:47:20 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.218 --rc genhtml_branch_coverage=1 00:03:41.218 --rc genhtml_function_coverage=1 00:03:41.218 --rc genhtml_legend=1 00:03:41.218 --rc geninfo_all_blocks=1 00:03:41.218 --rc geninfo_unexecuted_blocks=1 00:03:41.218 00:03:41.218 ' 00:03:41.219 13:47:20 version -- app/version.sh@17 -- # get_header_version major 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:03:41.219 13:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # cut -f2 00:03:41.219 13:47:20 version -- app/version.sh@17 -- # major=25 00:03:41.219 13:47:20 version -- app/version.sh@18 -- # get_header_version minor 00:03:41.219 13:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # cut -f2 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:03:41.219 13:47:20 version -- app/version.sh@18 -- # minor=1 00:03:41.219 13:47:20 version -- app/version.sh@19 -- # get_header_version patch 00:03:41.219 13:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # cut -f2 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:03:41.219 13:47:20 version -- app/version.sh@19 -- # patch=0 00:03:41.219 13:47:20 version -- app/version.sh@20 -- # get_header_version suffix 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # cut -f2 00:03:41.219 13:47:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:41.219 13:47:20 version -- app/version.sh@14 -- # tr -d '"' 00:03:41.219 13:47:20 version -- app/version.sh@20 -- # suffix=-pre 00:03:41.219 13:47:20 version -- app/version.sh@22 -- # version=25.1 00:03:41.219 13:47:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:03:41.219 13:47:20 version -- app/version.sh@28 -- # version=25.1rc0 00:03:41.219 13:47:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:03:41.219 13:47:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:03:41.219 13:47:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:03:41.219 13:47:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:03:41.219 00:03:41.219 real 0m0.173s 00:03:41.219 user 0m0.104s 00:03:41.219 sys 0m0.092s 00:03:41.219 13:47:20 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:41.219 
13:47:20 version -- common/autotest_common.sh@10 -- # set +x 00:03:41.219 ************************************ 00:03:41.219 END TEST version 00:03:41.219 ************************************ 00:03:41.219 13:47:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:03:41.219 13:47:20 -- spdk/autotest.sh@194 -- # uname -s 00:03:41.219 13:47:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:03:41.219 13:47:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:03:41.219 13:47:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:03:41.219 13:47:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@256 -- # timing_exit lib 00:03:41.219 13:47:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:41.219 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.219 13:47:20 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:03:41.219 13:47:20 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:03:41.219 13:47:20 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:03:41.219 13:47:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:03:41.219 13:47:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.219 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.219 ************************************ 00:03:41.219 START TEST nvmf_tcp 00:03:41.219 ************************************ 00:03:41.219 13:47:20 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:03:41.219 * Looking for test storage... 
00:03:41.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:03:41.219 13:47:20 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.219 13:47:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.219 13:47:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.219 13:47:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.219 13:47:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:03:41.480 13:47:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.480 13:47:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.480 13:47:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.480 13:47:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.480 --rc genhtml_branch_coverage=1 00:03:41.480 --rc genhtml_function_coverage=1 00:03:41.480 --rc genhtml_legend=1 00:03:41.480 --rc geninfo_all_blocks=1 00:03:41.480 --rc geninfo_unexecuted_blocks=1 00:03:41.480 00:03:41.480 ' 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.480 --rc genhtml_branch_coverage=1 00:03:41.480 --rc genhtml_function_coverage=1 00:03:41.480 --rc genhtml_legend=1 00:03:41.480 --rc geninfo_all_blocks=1 00:03:41.480 --rc geninfo_unexecuted_blocks=1 00:03:41.480 00:03:41.480 ' 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.480 --rc genhtml_branch_coverage=1 00:03:41.480 --rc genhtml_function_coverage=1 00:03:41.480 --rc genhtml_legend=1 00:03:41.480 --rc geninfo_all_blocks=1 00:03:41.480 --rc geninfo_unexecuted_blocks=1 00:03:41.480 00:03:41.480 ' 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.480 --rc genhtml_branch_coverage=1 00:03:41.480 --rc genhtml_function_coverage=1 00:03:41.480 --rc genhtml_legend=1 00:03:41.480 --rc geninfo_all_blocks=1 00:03:41.480 --rc geninfo_unexecuted_blocks=1 00:03:41.480 00:03:41.480 ' 00:03:41.480 13:47:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:03:41.480 13:47:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:03:41.480 13:47:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.480 13:47:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:41.480 ************************************ 00:03:41.480 START TEST nvmf_target_core 00:03:41.480 ************************************ 00:03:41.480 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:03:41.481 * Looking for test storage... 00:03:41.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.481 --rc genhtml_branch_coverage=1 00:03:41.481 --rc genhtml_function_coverage=1 00:03:41.481 --rc genhtml_legend=1 00:03:41.481 --rc geninfo_all_blocks=1 00:03:41.481 --rc geninfo_unexecuted_blocks=1 00:03:41.481 00:03:41.481 ' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.481 --rc genhtml_branch_coverage=1 00:03:41.481 --rc genhtml_function_coverage=1 00:03:41.481 --rc genhtml_legend=1 00:03:41.481 --rc geninfo_all_blocks=1 00:03:41.481 --rc geninfo_unexecuted_blocks=1 00:03:41.481 00:03:41.481 ' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:41.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.481 --rc genhtml_branch_coverage=1 00:03:41.481 --rc genhtml_function_coverage=1 00:03:41.481 --rc genhtml_legend=1 00:03:41.481 --rc geninfo_all_blocks=1 00:03:41.481 --rc geninfo_unexecuted_blocks=1 00:03:41.481 00:03:41.481 ' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.481 --rc genhtml_branch_coverage=1 00:03:41.481 --rc genhtml_function_coverage=1 00:03:41.481 --rc genhtml_legend=1 00:03:41.481 --rc geninfo_all_blocks=1 00:03:41.481 --rc geninfo_unexecuted_blocks=1 00:03:41.481 00:03:41.481 ' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:41.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:03:41.481 
************************************ 00:03:41.481 START TEST nvmf_abort 00:03:41.481 ************************************ 00:03:41.481 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:03:41.481 * Looking for test storage... 00:03:41.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.742 --rc genhtml_branch_coverage=1 00:03:41.742 --rc genhtml_function_coverage=1 00:03:41.742 --rc genhtml_legend=1 00:03:41.742 --rc geninfo_all_blocks=1 00:03:41.742 --rc geninfo_unexecuted_blocks=1 00:03:41.742 00:03:41.742 ' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.742 --rc genhtml_branch_coverage=1 00:03:41.742 --rc genhtml_function_coverage=1 00:03:41.742 --rc genhtml_legend=1 00:03:41.742 --rc geninfo_all_blocks=1 00:03:41.742 --rc geninfo_unexecuted_blocks=1 00:03:41.742 00:03:41.742 ' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.742 --rc genhtml_branch_coverage=1 00:03:41.742 --rc genhtml_function_coverage=1 00:03:41.742 --rc genhtml_legend=1 00:03:41.742 --rc geninfo_all_blocks=1 00:03:41.742 --rc geninfo_unexecuted_blocks=1 00:03:41.742 00:03:41.742 ' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.742 --rc genhtml_branch_coverage=1 00:03:41.742 --rc genhtml_function_coverage=1 00:03:41.742 --rc genhtml_legend=1 00:03:41.742 --rc geninfo_all_blocks=1 00:03:41.742 --rc geninfo_unexecuted_blocks=1 00:03:41.742 00:03:41.742 ' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.742 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:41.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
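
nvmftestinit above is where nvmf/common.sh turns this machine into a self-contained NVMe/TCP testbed: on a phy run it detects the two Intel E810 ports (0x8086:0x159b, ice driver), leaves one on the host as the initiator and moves the other into a private network namespace to act as the target, then opens the listener port and ping-tests both directions. A minimal sketch of that plumbing, assuming the same netdev names (cvl_0_0, cvl_0_1), addresses, and port that appear in the trace below; interface names and PCI enumeration vary per host:

    # target-side port gets its own namespace so host and target stacks stay separate
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-facing E810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let the NVMe/TCP listener port (4420) through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
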
00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:03:41.743 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:48.313 13:47:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:03:48.313 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:03:48.314 Found 0000:31:00.0 (0x8086 - 0x159b) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:03:48.314 Found 0000:31:00.1 (0x8086 - 0x159b) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:03:48.314 13:47:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:03:48.314 Found net devices under 0000:31:00.0: cvl_0_0 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:03:48.314 Found net devices under 0000:31:00.1: cvl_0_1 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:03:48.314 13:47:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:03:48.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:03:48.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:03:48.314 00:03:48.314 --- 10.0.0.2 ping statistics --- 00:03:48.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:48.314 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:03:48.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:03:48.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:03:48.314 00:03:48.314 --- 10.0.0.1 ping statistics --- 00:03:48.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:48.314 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=619871 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 619871 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 619871 ']' 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:48.314 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.314 [2024-11-06 13:47:26.844265] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:03:48.314 [2024-11-06 13:47:26.844312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:03:48.314 [2024-11-06 13:47:26.925036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:48.314 [2024-11-06 13:47:26.979451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:03:48.314 [2024-11-06 13:47:26.979503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:03:48.314 [2024-11-06 13:47:26.979512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:48.314 [2024-11-06 13:47:26.979519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:48.314 [2024-11-06 13:47:26.979526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:03:48.314 [2024-11-06 13:47:26.981416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:48.314 [2024-11-06 13:47:26.981747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:48.314 [2024-11-06 13:47:26.981749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 [2024-11-06 13:47:27.686608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 Malloc0 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 Delay0 
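
The rpc_cmd calls around this point are the target-side provisioning for the abort test; rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to the nvmf_tgt just started inside the namespace (the RPC Unix socket lives on the shared filesystem, so no netns exec is needed for it). Replayed by hand the sequence would look roughly like the following sketch; the Delay0 comment is a hedged reading of the delay-bdev arguments (average/p99 read and write latency, in microseconds), the point being to keep I/O outstanding long enough for aborts to land:

    # assumes an nvmf_tgt already running and listening on the default /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MB RAM-backed bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s injected latency per I/O
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
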
00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 [2024-11-06 13:47:27.748926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.575 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:03:48.575 [2024-11-06 13:47:27.857703] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:03:51.112 Initializing NVMe Controllers 00:03:51.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:03:51.112 controller IO queue size 128 less than required 00:03:51.112 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:03:51.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:03:51.112 Initialization complete. Launching workers. 
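
What follows is the output of the abort example itself, launched on the initiator side a few lines above (-q 128 queue depth, -c 0x1 one core, -t 1 one second, -l warning log level, transport ID pointing at the namespaced listener). Read loosely: the worker floods the Delay0-backed namespace with I/O and concurrently submits Abort commands for them, so the NS line below tallies the I/Os (most "fail" precisely because they were successfully aborted) while the CTRLR line tallies the abort commands themselves. A standalone replay would be roughly:

    # one-second abort storm against the namespaced target, single core, queue depth 128
    ./build/examples/abort -q 128 -c 0x1 -t 1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
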
00:03:51.112 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29834 00:03:51.112 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29899, failed to submit 62 00:03:51.112 success 29838, unsuccessful 61, failed 0 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:03:51.112 13:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:03:51.112 rmmod nvme_tcp 00:03:51.112 rmmod nvme_fabrics 00:03:51.112 rmmod nvme_keyring 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 619871 ']' 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 619871 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 619871 ']' 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 619871 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 619871 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 619871' 00:03:51.112 killing process with pid 619871 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 619871 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 619871 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:03:51.112 13:47:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:53.019 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:03:53.019 00:03:53.019 real 0m11.547s 00:03:53.019 user 0m13.380s 00:03:53.019 sys 0m4.976s 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:53.020 ************************************ 00:03:53.020 END TEST nvmf_abort 00:03:53.020 ************************************ 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:03:53.020 ************************************ 00:03:53.020 START TEST nvmf_ns_hotplug_stress 00:03:53.020 ************************************ 00:03:53.020 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:03:53.280 * Looking for test storage... 
00:03:53.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:53.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.280 --rc genhtml_branch_coverage=1 00:03:53.280 --rc genhtml_function_coverage=1 00:03:53.280 --rc genhtml_legend=1 00:03:53.280 --rc geninfo_all_blocks=1 00:03:53.280 --rc geninfo_unexecuted_blocks=1 00:03:53.280 00:03:53.280 ' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:53.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.280 --rc genhtml_branch_coverage=1 00:03:53.280 --rc genhtml_function_coverage=1 00:03:53.280 --rc genhtml_legend=1 00:03:53.280 --rc geninfo_all_blocks=1 00:03:53.280 --rc geninfo_unexecuted_blocks=1 00:03:53.280 00:03:53.280 ' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:53.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.280 --rc genhtml_branch_coverage=1 00:03:53.280 --rc genhtml_function_coverage=1 00:03:53.280 --rc genhtml_legend=1 00:03:53.280 --rc geninfo_all_blocks=1 00:03:53.280 --rc geninfo_unexecuted_blocks=1 00:03:53.280 00:03:53.280 ' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:53.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.280 --rc genhtml_branch_coverage=1 00:03:53.280 --rc genhtml_function_coverage=1 00:03:53.280 --rc genhtml_legend=1 00:03:53.280 --rc geninfo_all_blocks=1 00:03:53.280 --rc geninfo_unexecuted_blocks=1 00:03:53.280 00:03:53.280 ' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.280 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:53.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:03:53.281 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:58.621 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:03:58.621 Found 0000:31:00.0 (0x8086 - 0x159b) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:58.622 
13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:03:58.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:03:58.622 Found net devices under 0000:31:00.0: cvl_0_0 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:03:58.622 Found net devices under 0000:31:00.1: cvl_0_1 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:03:58.622 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:03:58.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:03:58.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:03:58.882 00:03:58.882 --- 10.0.0.2 ping statistics --- 00:03:58.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:58.882 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:03:58.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:03:58.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:03:58.882 00:03:58.882 --- 10.0.0.1 ping statistics --- 00:03:58.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:58.882 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:03:58.882 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=625110 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 625110 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 625110 ']' 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:58.883 13:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:03:58.883 [2024-11-06 13:47:38.012703] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:03:58.883 [2024-11-06 13:47:38.012753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:03:58.883 [2024-11-06 13:47:38.099381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:58.883 [2024-11-06 13:47:38.150738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:03:58.883 [2024-11-06 13:47:38.150783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:03:58.883 [2024-11-06 13:47:38.150797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:58.883 [2024-11-06 13:47:38.150805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:58.883 [2024-11-06 13:47:38.150811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
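One genuine shell error is recorded earlier in this trace, at nvmf/common.sh line 33: the test '[' '' -eq 1 ']' fails with "[: : integer expression expected" because the variable being tested (likely an unset configuration flag; the log only shows the empty expansion) supplies no integer operand for -eq. The test simply evaluates false with a diagnostic, so the run continues. A minimal sketch of the failure and two standard guards, with an illustrative variable name:

    v=""                                # expands empty, as in the trace
    [ "$v" -eq 1 ] || true              # reproduces: [: : integer expression expected
    if [ "${v:-0}" -eq 1 ]; then :; fi  # guard 1: default the parameter to an integer
    if [[ $v -eq 1 ]]; then :; fi       # guard 2: bash [[ ]] evaluates an empty
                                        # -eq operand arithmetically as 0, no error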
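The gather_supported_nvmf_pci_devs walk above matches Intel device ID 0x159b (E810) twice, at 0000:31:00.0 and 0000:31:00.1, and finds one up net device per port, cvl_0_0 and cvl_0_1. The script draws its candidates from a pci_bus_cache built elsewhere; a sysfs-only sketch of the same discovery, not the helper the script actually uses:

    #!/usr/bin/env bash
    # Enumerate Intel E810 (0x8086:0x159b) PCI functions and the kernel net
    # devices bound to them, mirroring the "Found ..." lines in the trace.
    shopt -s nullglob
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} (0x8086 - 0x159b)"
        devs=("$pci"/net/*)              # e.g. .../0000:31:00.0/net/cvl_0_0
        ((${#devs[@]})) && echo "  net devices: ${devs[*]##*/}"
    done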
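nvmf_tcp_init above then splits those two ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), a firewall rule opens NVMe/TCP port 4420, and reachability is proven with one ping in each direction. Condensed to a standalone sketch (root required; interface and namespace names taken from the log):

    #!/usr/bin/env bash
    set -euo pipefail
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator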
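The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is waitforlisten polling the freshly launched nvmf_tgt until RPC answers; the (( i == 0 )) / return 0 pair just below is that loop succeeding. A reduced sketch of the pattern (retry budget and sleep interval are illustrative, not the script's values):

    # Launch the target inside the test namespace, then poll its RPC socket.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 100; i > 0; i--)); do
        kill -0 "$nvmfpid" 2>/dev/null || break   # stop early if the target died
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null \
        || { echo "nvmf_tgt never started listening" >&2; exit 1; }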
00:03:58.883 [2024-11-06 13:47:38.152673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:58.883 [2024-11-06 13:47:38.152832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:58.883 [2024-11-06 13:47:38.152833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:03:59.819 [2024-11-06 13:47:38.962893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.819 13:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:00.078 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:00.078 [2024-11-06 13:47:39.284046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:00.078 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:00.337 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:00.337 Malloc0 00:04:00.599 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:00.599 Delay0 00:04:00.599 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:00.858 13:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:04:00.858 NULL1 00:04:00.858 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:01.117 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=625486 00:04:01.117 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:01.117 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:01.117 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:01.376 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:01.376 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:01.376 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:01.635 [2024-11-06 13:47:40.738853] bdev.c:5395:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:04:01.635 true 00:04:01.635 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:01.635 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:01.894 13:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:01.894 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:04:01.894 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:04:02.154 true 00:04:02.154 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:02.154 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:02.154 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:02.413 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:04:02.413 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:04:02.673 true 00:04:02.673 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
625486 00:04:02.673 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:02.673 13:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:02.932 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:04:02.932 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:04:02.932 true 00:04:02.932 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:02.932 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:03.191 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:03.449 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:04:03.449 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:04:03.449 true 00:04:03.449 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:03.449 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:03.708 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:03.967 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:04:03.967 13:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:04:03.967 true 00:04:03.967 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:03.967 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:04.227 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:04.227 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:04:04.227 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:04:04.487 true 00:04:04.487 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:04.487 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:04.747 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:04.747 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:04:04.747 13:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:04:05.006 true 00:04:05.006 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:05.007 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:05.007 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:05.266 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:04:05.266 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:04:05.526 true 00:04:05.526 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:05.526 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:05.526 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:05.785 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:04:05.786 13:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:04:05.786 true 00:04:05.786 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:05.786 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:06.045 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:06.304 13:47:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:04:06.304 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:04:06.304 true 00:04:06.304 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:06.304 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:06.564 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:06.823 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:04:06.823 13:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:04:06.823 true 00:04:06.823 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:06.823 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:07.083 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:07.083 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:04:07.083 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:04:07.342 true 00:04:07.342 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:07.342 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:07.602 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:07.602 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:04:07.602 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:04:07.862 true 00:04:07.862 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:07.862 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:07.862 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:08.160 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:04:08.160 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:04:08.160 true 00:04:08.477 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:08.478 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:08.478 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:08.478 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:04:08.478 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:04:08.737 true 00:04:08.737 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:08.737 13:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:08.997 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:08.997 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:04:08.997 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:04:09.256 true 00:04:09.256 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:09.256 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:09.256 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:09.516 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:04:09.516 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:04:09.776 true 00:04:09.776 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:09.776 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:09.776 13:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:10.036 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:04:10.036 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:04:10.036 true 00:04:10.296 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:10.296 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:10.296 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:10.556 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:04:10.556 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:04:10.556 true 00:04:10.556 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:10.556 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:10.817 13:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:11.077 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:04:11.077 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:04:11.077 true 00:04:11.077 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:11.077 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:11.336 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:11.337 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:04:11.337 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:04:11.596 true 00:04:11.596 13:47:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:11.596 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:11.855 13:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:11.855 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:04:11.855 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:04:12.115 true 00:04:12.115 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:12.115 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:12.374 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:12.374 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:04:12.374 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:04:12.634 true 00:04:12.634 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:12.634 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:12.634 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:12.893 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:04:12.893 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:04:13.154 true 00:04:13.154 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:13.154 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:13.154 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:13.413 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:04:13.413 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:04:13.413 true 00:04:13.413 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:13.413 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:13.672 13:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:13.932 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:04:13.932 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:04:13.932 true 00:04:13.932 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:13.932 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:14.191 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:14.450 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:04:14.450 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:04:14.450 true 00:04:14.450 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:14.450 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:14.711 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:14.711 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:04:14.711 13:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:04:14.970 true 00:04:14.970 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:14.970 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:15.230 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:15.230 13:47:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:04:15.230 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:04:15.490 true 00:04:15.490 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:15.490 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:15.749 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:15.749 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:04:15.749 13:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:04:16.008 true 00:04:16.008 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:16.008 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:16.008 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:16.267 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:04:16.267 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:04:16.526 true 00:04:16.526 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:16.526 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:16.526 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:16.785 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:04:16.785 13:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:04:16.785 true 00:04:16.785 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:16.786 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:17.045 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:17.305 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:04:17.305 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:04:17.305 true 00:04:17.305 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:17.305 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:17.564 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:17.564 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:04:17.564 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:04:17.823 true 00:04:17.823 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:17.823 13:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:18.083 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:18.083 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:04:18.083 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:04:18.343 true 00:04:18.343 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:18.343 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:18.602 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:18.603 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:04:18.603 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:04:18.861 true 00:04:18.861 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:18.862 13:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:18.862 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:19.121 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:04:19.121 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:04:19.380 true 00:04:19.380 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:19.380 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:19.380 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:19.639 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:04:19.639 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:04:19.639 true 00:04:19.639 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:19.639 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:19.899 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:20.159 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:04:20.159 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:04:20.159 true 00:04:20.159 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:20.159 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:20.418 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:20.418 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:04:20.418 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:04:20.677 true 00:04:20.677 13:47:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:20.677 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:20.936 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:20.936 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:04:20.936 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:04:21.195 true 00:04:21.195 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:21.195 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:21.455 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:21.455 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:04:21.455 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:04:21.714 true 00:04:21.714 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:21.714 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:21.714 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:21.972 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:04:21.972 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:04:22.231 true 00:04:22.231 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:22.231 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:22.231 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:22.490 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:04:22.490 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:04:22.490 true 00:04:22.750 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:22.750 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:22.750 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:23.010 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:04:23.010 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:04:23.010 true 00:04:23.010 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:23.010 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:23.269 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:23.529 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:04:23.529 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:04:23.529 true 00:04:23.529 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:23.529 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:23.789 13:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:23.789 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:04:23.789 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:04:24.049 true 00:04:24.049 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:24.049 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:24.308 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:24.308 13:48:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:04:24.308 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:04:24.568 true 00:04:24.568 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:24.568 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:24.568 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:24.827 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:04:24.827 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:04:25.086 true 00:04:25.086 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:25.086 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:25.086 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:25.345 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:04:25.345 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:04:25.604 true 00:04:25.604 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:25.604 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:25.604 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:25.864 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:04:25.864 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:04:25.864 true 00:04:25.864 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:25.864 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:26.123 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:26.382 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:04:26.382 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:04:26.382 true 00:04:26.382 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:26.382 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:26.642 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:26.642 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:04:26.642 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:04:26.901 true 00:04:26.901 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:26.901 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:27.160 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:27.160 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:04:27.160 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:04:27.420 true 00:04:27.420 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:27.420 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:27.678 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:27.678 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:04:27.678 13:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:04:27.938 true 00:04:27.938 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:27.938 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:27.938 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:28.201 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:04:28.201 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:04:28.201 true 00:04:28.461 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:28.461 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:28.461 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:28.720 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:04:28.721 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:04:28.721 true 00:04:28.721 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:28.721 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:28.979 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:29.240 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:04:29.240 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:04:29.240 true 00:04:29.240 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486 00:04:29.240 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:29.499 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:29.759 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:04:29.759 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:04:29.759 true 00:04:29.759 13:48:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486
00:04:29.759 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:30.018 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:30.018 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061
00:04:30.018 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061
00:04:30.277 true
00:04:30.277 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486
00:04:30.277 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:30.541 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:30.541 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062
00:04:30.541 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062
00:04:30.807 true
00:04:30.807 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486
00:04:30.807 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:30.807 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:31.066 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063
00:04:31.066 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063
00:04:31.326 true
00:04:31.326 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486
00:04:31.326 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:31.326 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:31.585 Initializing NVMe Controllers
00:04:31.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:04:31.585 Controller IO queue size 128, less than required.
00:04:31.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:04:31.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:04:31.585 Initialization complete. Launching workers.
00:04:31.585 ========================================================
00:04:31.585                                                                                                  Latency(us)
00:04:31.585 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:04:31.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30527.78      14.91    4192.98    1095.62    8272.66
00:04:31.585 ========================================================
00:04:31.585 Total                                                                    :   30527.78      14.91    4192.98    1095.62    8272.66
00:04:31.585 
00:04:31.585 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1064
00:04:31.585 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064
00:04:31.585 true
00:04:31.585 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 625486
00:04:31.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (625486) - No such process
00:04:31.585 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 625486
00:04:31.585 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:31.845 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:04:32.104 null0
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:04:32.104 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:04:32.362 null1
00:04:32.362 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:04:32.362 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:04:32.362 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 
00:04:32.362 null2 00:04:32.362 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:32.362 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:32.362 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:04:32.621 null3 00:04:32.621 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:32.621 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:32.621 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:04:32.880 null4 00:04:32.880 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:32.880 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:32.880 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:04:32.880 null5 00:04:32.880 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:32.880 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:32.880 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:04:33.139 null6 00:04:33.139 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:33.139 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:33.139 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:04:33.139 null7 00:04:33.139 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:33.139 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
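For reference: the @44-@50 markers that dominate the trace above are the script's main hotplug loop. While the perf workload (PID 625486 in this run) stays alive, namespace 1 is detached, the Delay0 bdev is re-attached, and the NULL1 null bdev is grown by one block per pass; the bare "true" entries are bdev_null_resize's output, and the "No such process" message marks the loop's exit once the workload finishes. A minimal sketch of that loop, reconstructed from the trace markers rather than quoted from ns_hotplug_stress.sh; rpc.py stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and perf_pid is a stand-in name for the workload PID:

# Sketch reconstructed from the @44-@50 trace markers above; not verbatim
# from ns_hotplug_stress.sh. rpc.py and perf_pid are stand-ins.
null_size=1025
while kill -0 "$perf_pid"; do                    # loop while the I/O workload runs
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))                 # 1026, 1027, ... 1064 in this run
    rpc.py bdev_null_resize NULL1 $null_size     # prints "true" on success
done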
00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.398 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
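The entries from @58 onward, including those still interleaving below, are the test's concurrent phase: eight null bdevs (null0 through null7, created with the traced 100/4096 arguments) are set up, eight add_remove workers are launched in the background, each attaching and detaching its own namespace ID up to ten times, and the script then waits on all worker PIDs (the wait at @66). The scrambled ordering of the @16-@18 entries is simply these eight workers tracing concurrently. A sketch of the phase, again reconstructed from the trace markers with rpc.py as shorthand for the full scripts/rpc.py path:

# Sketch reconstructed from the @58-@66 trace markers; not verbatim from the script.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    rpc.py bdev_null_create "null$i" 100 4096    # create null0..null7
done
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do               # the (( i < 10 )) bound in the trace
        rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &             # e.g. add_remove 1 null0 ... add_remove 8 null7
    pids+=($!)
done
wait "${pids[@]}"                                # the @66 wait on the eight worker PIDs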
00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 633573 633574 633576 633577 633580 633581 633583 633584 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:33.399 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:33.659 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:33.919 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:33.919 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:33.919 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:33.919 13:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:33.919 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.179 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.439 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:34.699 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:34.959 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
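Everything in this stretch of the trace is the same three script lines firing over and over: ns_hotplug_stress.sh line 16 is a counted loop header ((( ++i )) / (( i < 10 ))), line 17 adds a namespace to nqn.2016-06.io.spdk:cnode1 via rpc.py, and line 18 removes one again; the same pattern keeps repeating below until roughly 00:04:36.7. Namespace IDs 1-8 consistently pair with bdevs null0-null7, but the IDs arrive out of order and the counter entries occasionally double up, which is what the interleaved xtrace of several such loops running concurrently looks like. A minimal bash sketch of what lines 16-18 plausibly contain -- the function name, the rpc_py variable, and the one-background-worker-per-namespace structure are assumptions inferred from the trace, not the verbatim script:

    # Hypothetical reconstruction of the loop traced at @16-@18.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev_name=$2 i
        for ((i = 0; i < 10; ++i)); do                      # @16: (( ++i )) / (( i < 10 ))
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" \
                nqn.2016-06.io.spdk:cnode1 "$bdev_name"     # @17: hot-add the namespace
            $rpc_py nvmf_subsystem_remove_ns \
                nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: hot-remove it again
        done
    }

    # One backgrounded worker per namespace would explain the interleaving above:
    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &
    done
    wait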
00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:34.959 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
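For reference, the two RPCs being exercised take the subsystem NQN plus either a bdev name (add, with -n choosing the namespace ID) or a bare namespace ID (remove). The null0-null7 bdevs themselves are never created in this excerpt; presumably the test set them up earlier, e.g. with bdev_null_create. Illustrative standalone invocations -- the 100 MiB / 512-byte null-bdev geometry is an assumption, only the add_ns/remove_ns argument shapes come from the log itself:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Assumed setup step (not shown in this excerpt): back NSID 1 with a null bdev.
    $rpc_py bdev_null_create null0 100 512

    # Attach the bdev as namespace 1, then hot-remove it, as the loop above does.
    $rpc_py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1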
00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.219 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.480 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:35.740 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:35.740 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:36.000 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:36.260 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:36.519 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.520 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:36.779 rmmod nvme_tcp 00:04:36.779 rmmod nvme_fabrics 00:04:36.779 rmmod nvme_keyring 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 625110 ']' 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 625110 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 625110 ']' 00:04:36.779 
13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 625110 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 625110 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 625110' 00:04:36.779 killing process with pid 625110 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 625110 00:04:36.779 13:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 625110 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:37.039 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:38.943 00:04:38.943 real 0m45.852s 00:04:38.943 user 3m13.284s 00:04:38.943 sys 0m14.858s 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:38.943 ************************************ 00:04:38.943 END TEST nvmf_ns_hotplug_stress 00:04:38.943 ************************************ 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:38.943 ************************************ 00:04:38.943 START TEST nvmf_delete_subsystem 00:04:38.943 ************************************ 00:04:38.943 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:04:39.204 * Looking for test storage... 00:04:39.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.204 --rc genhtml_branch_coverage=1 00:04:39.204 --rc genhtml_function_coverage=1 00:04:39.204 --rc genhtml_legend=1 00:04:39.204 --rc geninfo_all_blocks=1 00:04:39.204 --rc geninfo_unexecuted_blocks=1 00:04:39.204 00:04:39.204 ' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.204 --rc genhtml_branch_coverage=1 00:04:39.204 --rc genhtml_function_coverage=1 00:04:39.204 --rc genhtml_legend=1 00:04:39.204 --rc geninfo_all_blocks=1 00:04:39.204 --rc geninfo_unexecuted_blocks=1 00:04:39.204 00:04:39.204 ' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.204 --rc genhtml_branch_coverage=1 00:04:39.204 --rc genhtml_function_coverage=1 00:04:39.204 --rc genhtml_legend=1 00:04:39.204 --rc geninfo_all_blocks=1 00:04:39.204 --rc geninfo_unexecuted_blocks=1 00:04:39.204 00:04:39.204 ' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.204 --rc genhtml_branch_coverage=1 00:04:39.204 --rc genhtml_function_coverage=1 00:04:39.204 --rc genhtml_legend=1 00:04:39.204 --rc geninfo_all_blocks=1 00:04:39.204 --rc geninfo_unexecuted_blocks=1 00:04:39.204 00:04:39.204 ' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.204 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:04:39.205 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:04:44.509 Found 0000:31:00.0 (0x8086 - 0x159b) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:44.509 
13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:04:44.509 Found 0000:31:00.1 (0x8086 - 0x159b) 00:04:44.509 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:04:44.510 Found net devices under 0000:31:00.0: cvl_0_0 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:04:44.510 Found net devices under 0000:31:00.1: cvl_0_1 
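For orientation before the namespace plumbing that follows: nvmf_tcp_init takes the two ice ports just discovered, pins one (cvl_0_0) inside a private network namespace as the target side, and leaves the other (cvl_0_1) in the default namespace as the initiator side, so NVMe/TCP traffic crosses a real E810 link instead of loopback. A minimal sketch of that plumbing, consolidated from the ip/iptables commands traced below (the interface names, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are all specific to this run):

# flush any stale addresses, then move the target port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends and bring the links up
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

(The harness's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment; the iptr cleanup near the end of this test filters on that comment when restoring the ruleset.)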
00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:44.510 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:44.770 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:04:44.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:04:44.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms
00:04:44.771
00:04:44.771 --- 10.0.0.2 ping statistics ---
00:04:44.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:04:44.771 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms
00:04:44.771 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:04:44.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:04:44.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms
00:04:44.771
00:04:44.771 --- 10.0.0.1 ping statistics ---
00:04:44.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:04:44.771 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=639083 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 639083 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 639083 ']' 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.771 13:48:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.771 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.030 [2024-11-06 13:48:24.074039] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:04:45.030 [2024-11-06 13:48:24.074090] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:45.030 [2024-11-06 13:48:24.159198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.030 [2024-11-06 13:48:24.204121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:45.030 [2024-11-06 13:48:24.204173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:45.030 [2024-11-06 13:48:24.204182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:45.030 [2024-11-06 13:48:24.204189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:45.030 [2024-11-06 13:48:24.204195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:45.030 [2024-11-06 13:48:24.205839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.030 [2024-11-06 13:48:24.205842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.598 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.598 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:04:45.598 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:45.598 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.598 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 [2024-11-06 13:48:24.886277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:45.857 13:48:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 [2024-11-06 13:48:24.902457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 NULL1 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 Delay0 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=639116 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:04:45.857 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:04:45.857 [2024-11-06 13:48:24.987285] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
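To see the shape of this test through the xtrace noise: everything above reduces to the short RPC sequence below, after which the subsystem is deleted out from under the still-running perf job (traced next). This is a consolidated sketch rather than the literal script; rpc_cmd in the harness resolves to scripts/rpc.py against the /var/tmp/spdk.sock socket seen above, paths are shown relative to the SPDK tree, and every argument is copied from the trace:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# 70/30 random read/write at queue depth 128 on cores 2 and 3 (-c 0xC), 512-byte I/O, 5 seconds
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The delay bdev's 1000000 us (one second) average latencies guarantee that plenty of I/O is still queued when nvmf_delete_subsystem fires, which is exactly what this test wants to exercise: the storm of 'Read/Write completed with error (sct=0, sc=8)' completions below is that in-flight I/O being failed back to perf as the subsystem disappears.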
00:04:47.762 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:04:47.762 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:47.762 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:04:48.021 Read completed with error (sct=0, sc=8)
00:04:48.021 Write completed with error (sct=0, sc=8)
00:04:48.021 starting I/O failed: -6
[... several hundred similar 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines trimmed: these are the perf job's queued I/Os being failed while the subsystem is torn down; they were interleaved with the qpair state errors kept below, whose elapsed-time prefixes are omitted ...]
[2024-11-06 13:48:27.114836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f270800d4b0 is same with the state(6) to be set
[2024-11-06 13:48:27.115886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e77f00 is same with the state(6) to be set
[2024-11-06 13:48:28.089426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e795e0 is same with the state(6) to be set
[2024-11-06 13:48:28.116997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e780e0 is same with the state(6) to be set
[2024-11-06 13:48:28.118129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f270800d7e0 is same with the state(6) to be set
[2024-11-06 13:48:28.118378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f270800d020 is same with the state(6) to be set
[2024-11-06 13:48:28.118756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e784a0 is same with the state(6) to be set
00:04:49.045 Initializing NVMe Controllers
00:04:49.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:04:49.045 Controller IO queue size 128, less than required.
00:04:49.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:04:49.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:04:49.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:04:49.046 Initialization complete. Launching workers.
00:04:49.046 ========================================================
00:04:49.046                                                                                                 Latency(us)
00:04:49.046 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:04:49.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     170.97       0.08  908448.88     223.87 2001146.26
00:04:49.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     165.00       0.08  905806.68     224.73 1042082.56
00:04:49.046 ========================================================
00:04:49.046 Total                                                                    :     335.97       0.16  907151.23     223.87 2001146.26
00:04:49.046
00:04:49.046 [2024-11-06 13:48:28.119088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e795e0 (9): Bad file descriptor
00:04:49.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:04:49.046 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:49.046 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:04:49.046 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 639116
00:04:49.046 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 639116
00:04:49.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (639116) - No such process
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 639116
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 639116
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # type -t wait 00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 639116 00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.361 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.362 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:49.362 [2024-11-06 13:48:28.643006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=640117 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:49.621 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:04:49.621 [2024-11-06 13:48:28.697882] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:04:49.881 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:49.881 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:49.881 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:50.448 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:50.448 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:50.448 13:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:51.016 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:51.016 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:51.016 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:51.585 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:51.585 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:51.585 13:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:52.152 13:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:52.152 13:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:52.152 13:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:52.411 13:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:52.411 13:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:52.411 13:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:52.671 Initializing NVMe Controllers 00:04:52.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:04:52.671 Controller IO queue size 128, less than required. 00:04:52.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:04:52.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:04:52.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:04:52.671 Initialization complete. Launching workers. 
00:04:52.671 ========================================================
00:04:52.671                                                                                                 Latency(us)
00:04:52.671 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:04:52.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003036.23 1000184.28 1040996.74
00:04:52.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003362.28 1000200.19 1007911.07
00:04:52.671 ========================================================
00:04:52.671 Total                                                                    :     256.00       0.12 1003199.26 1000184.28 1040996.74
00:04:52.671
00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 640117 00:04:52.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (640117) - No such process 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 640117 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:52.930 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:52.930 rmmod nvme_tcp 00:04:53.189 rmmod nvme_fabrics 00:04:53.189 rmmod nvme_keyring 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 639083 ']' 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 639083 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 639083 ']' 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 639083 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 639083 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 639083' 00:04:53.189 killing process with pid 639083 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 639083 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 639083 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:53.189 13:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:55.727 00:04:55.727 real 0m16.244s 00:04:55.727 user 0m29.838s 00:04:55.727 sys 0m5.301s 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:55.727 ************************************ 00:04:55.727 END TEST nvmf_delete_subsystem 00:04:55.727 ************************************ 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:55.727 ************************************ 00:04:55.727 START TEST nvmf_host_management 00:04:55.727 ************************************ 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:04:55.727 * Looking for test storage... 
00:04:55.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.727 --rc genhtml_branch_coverage=1 00:04:55.727 --rc genhtml_function_coverage=1 00:04:55.727 --rc genhtml_legend=1 00:04:55.727 --rc geninfo_all_blocks=1 00:04:55.727 --rc geninfo_unexecuted_blocks=1 00:04:55.727 00:04:55.727 ' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.727 --rc genhtml_branch_coverage=1 00:04:55.727 --rc genhtml_function_coverage=1 00:04:55.727 --rc genhtml_legend=1 00:04:55.727 --rc geninfo_all_blocks=1 00:04:55.727 --rc geninfo_unexecuted_blocks=1 00:04:55.727 00:04:55.727 ' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.727 --rc genhtml_branch_coverage=1 00:04:55.727 --rc genhtml_function_coverage=1 00:04:55.727 --rc genhtml_legend=1 00:04:55.727 --rc geninfo_all_blocks=1 00:04:55.727 --rc geninfo_unexecuted_blocks=1 00:04:55.727 00:04:55.727 ' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.727 --rc genhtml_branch_coverage=1 00:04:55.727 --rc genhtml_function_coverage=1 00:04:55.727 --rc genhtml_legend=1 00:04:55.727 --rc geninfo_all_blocks=1 00:04:55.727 --rc geninfo_unexecuted_blocks=1 00:04:55.727 00:04:55.727 ' 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.727 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:04:55.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:04:55.728 13:48:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:01.008 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:01.009 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:01.009 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:01.009 Found net devices under 0000:31:00.0: cvl_0_0 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.009 13:48:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:01.009 Found net devices under 0000:31:00.1: cvl_0_1 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:01.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:01.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:05:01.009 00:05:01.009 --- 10.0.0.2 ping statistics --- 00:05:01.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.009 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:01.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:01.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:05:01.009 00:05:01.009 --- 10.0.0.1 ping statistics --- 00:05:01.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.009 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:01.009 13:48:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=645154 00:05:01.009 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 645154 00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 645154 ']' 00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.010 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:05:01.010 [2024-11-06 13:48:40.054582] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:05:01.010 [2024-11-06 13:48:40.054631] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:01.010 [2024-11-06 13:48:40.126716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.010 [2024-11-06 13:48:40.157888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:01.010 [2024-11-06 13:48:40.157918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:01.010 [2024-11-06 13:48:40.157925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.010 [2024-11-06 13:48:40.157930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.010 [2024-11-06 13:48:40.157935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
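
[editor's note] The launch just traced (nvmfappstart plus waitforlisten) can be reproduced standalone. A minimal sketch, assuming the paths and flags shown in this run; the poll count and sleep interval are illustrative, not the script's exact values:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start the nvmf target inside the test namespace, as in the trace above
    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # waitforlisten: poll the RPC socket until the app answers
    for _ in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock \
               framework_wait_init >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done
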
00:05:01.010 [2024-11-06 13:48:40.159413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.010 [2024-11-06 13:48:40.159669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.010 [2024-11-06 13:48:40.159803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.010 [2024-11-06 13:48:40.159805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.577 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.577 [2024-11-06 13:48:40.857433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.837 Malloc0 00:05:01.837 [2024-11-06 13:48:40.923346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=645521 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 645521 /var/tmp/bdevperf.sock 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 645521 ']' 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:01.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:01.837 { 00:05:01.837 "params": { 00:05:01.837 "name": "Nvme$subsystem", 00:05:01.837 "trtype": "$TEST_TRANSPORT", 00:05:01.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:01.837 "adrfam": "ipv4", 00:05:01.837 "trsvcid": "$NVMF_PORT", 00:05:01.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:01.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:01.837 "hdgst": ${hdgst:-false}, 00:05:01.837 "ddgst": ${ddgst:-false} 00:05:01.837 }, 00:05:01.837 "method": "bdev_nvme_attach_controller" 00:05:01.837 } 00:05:01.837 EOF 00:05:01.837 )") 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:01.837 13:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:01.837 "params": { 00:05:01.837 "name": "Nvme0", 00:05:01.837 "trtype": "tcp", 00:05:01.837 "traddr": "10.0.0.2", 00:05:01.837 "adrfam": "ipv4", 00:05:01.837 "trsvcid": "4420", 00:05:01.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:01.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:01.837 "hdgst": false, 00:05:01.837 "ddgst": false 00:05:01.837 }, 00:05:01.837 "method": "bdev_nvme_attach_controller" 00:05:01.837 }' 00:05:01.837 [2024-11-06 13:48:40.996871] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
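
[editor's note] The JSON fragment printf'd above is a single bdev-subsystem config entry; gen_nvmf_target_json wraps such entries in the standard SPDK "subsystems" envelope before handing the result to bdevperf on an anonymous fd (the /dev/fd/63 in the command line above is a process substitution). A sketch of an equivalent standalone launch, with the envelope reconstructed rather than shown verbatim in the log:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    ) -q 64 -o 65536 -w verify -t 10
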
00:05:01.837 [2024-11-06 13:48:40.996922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645521 ] 00:05:01.837 [2024-11-06 13:48:41.075302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.837 [2024-11-06 13:48:41.111695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.096 Running I/O for 10 seconds... 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=713 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 713 -ge 100 ']' 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:05:02.665 13:48:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.665 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 [2024-11-06 13:48:41.818024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.818115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a1b0 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.819640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:05:02.665 [2024-11-06 13:48:41.819677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.819688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:05:02.665 [2024-11-06 13:48:41.819696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.819704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:05:02.665 [2024-11-06 13:48:41.819712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.819726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:05:02.665 [2024-11-06 13:48:41.819734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
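
[editor's note] The waitforio loop traced above polls bdevperf's per-bdev statistics until enough reads have completed; this run saw 713 on the first check against the threshold of 100. A sketch of the equivalent polling, with an assumed sleep between attempts:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }

    ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25                      # assumed pacing between polls
    done
    echo "waitforio ret=$ret"
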
00:05:02.665 [2024-11-06 13:48:41.819741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa95b00 is same with the state(6) to be set 00:05:02.665 [2024-11-06 13:48:41.820168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.665 [2024-11-06 13:48:41.820325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.665 [2024-11-06 13:48:41.820333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.666 [2024-11-06 13:48:41.820343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.666 [2024-11-06 13:48:41.820350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:02.666 [2024-11-06 13:48:41.820360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:02.666 [2024-11-06 
13:48:41.820367 .. 13:48:41.821326] nvme_qpair.c: *NOTICE*: [repeated command/completion pairs elided] nvme_io_qpair_print_command printed every outstanding I/O on sqid:1 (READ cid:2-5 nsid:1 lba:98560-98944 and WRITE cid:14-63 nsid:1 lba:100096-106368, all len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 -- every queued command was aborted by the submission queue deletion during the controller reset below.
00:05:02.667 [2024-11-06 13:48:41.822553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:05:02.667 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:02.667 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:05:02.667 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:02.667 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:05:02.667 task offset: 99072 on job bdev=Nvme0n1 fails
00:05:02.667
00:05:02.667 Latency(us)
00:05:02.667 [2024-11-06T12:48:41.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:05:02.667 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:05:02.667 Job: Nvme0n1 ended in about 0.52 seconds with error
00:05:02.667 Verification LBA range: start 0x0 length 0x400
00:05:02.667 Nvme0n1 : 0.52 1475.62 92.23 122.97 0.00 39005.40 2088.96 34297.17
00:05:02.667 [2024-11-06T12:48:41.951Z] ===================================================================================================================
00:05:02.667 [2024-11-06T12:48:41.951Z] Total : 1475.62 92.23 122.97 0.00 39005.40 2088.96 34297.17
00:05:02.667 [2024-11-06 13:48:41.824545] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:02.667 [2024-11-06 13:48:41.824565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa95b00 (9): Bad file descriptor
00:05:02.667 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:02.667 13:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-11-06 13:48:41.872451]
bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 645521 00:05:03.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (645521) - No such process 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:03.605 { 00:05:03.605 "params": { 00:05:03.605 "name": "Nvme$subsystem", 00:05:03.605 "trtype": "$TEST_TRANSPORT", 00:05:03.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:03.605 "adrfam": "ipv4", 00:05:03.605 "trsvcid": "$NVMF_PORT", 00:05:03.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:03.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:03.605 "hdgst": ${hdgst:-false}, 00:05:03.605 "ddgst": ${ddgst:-false} 00:05:03.605 }, 00:05:03.605 "method": "bdev_nvme_attach_controller" 00:05:03.605 } 00:05:03.605 EOF 00:05:03.605 )") 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:03.605 13:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:03.605 "params": { 00:05:03.605 "name": "Nvme0", 00:05:03.605 "trtype": "tcp", 00:05:03.605 "traddr": "10.0.0.2", 00:05:03.605 "adrfam": "ipv4", 00:05:03.605 "trsvcid": "4420", 00:05:03.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:03.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:03.605 "hdgst": false, 00:05:03.605 "ddgst": false 00:05:03.605 }, 00:05:03.605 "method": "bdev_nvme_attach_controller" 00:05:03.605 }' 00:05:03.605 [2024-11-06 13:48:42.867430] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
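For reference, the JSON that gen_nvmf_target_json assembles above (and hands to bdevperf on /dev/fd/62) can be written out as a standalone config file. A minimal sketch, reusing the addresses and NQNs from this run; /tmp/nvme0.json is an illustrative name, not a file the test creates:

# Attach-controller config as printed above; the subsystems/bdev/config
# wrapper is SPDK's JSON config layout.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the run above: 64 queued 64 KiB verify I/Os for 1 s.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1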
00:05:03.605 [2024-11-06 13:48:42.867483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645881 ] 00:05:03.864 [2024-11-06 13:48:42.945798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.864 [2024-11-06 13:48:42.983943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.123 Running I/O for 1 seconds... 00:05:05.060 1791.00 IOPS, 111.94 MiB/s 00:05:05.060 Latency(us) 00:05:05.060 [2024-11-06T12:48:44.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:05.060 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:05.060 Verification LBA range: start 0x0 length 0x400 00:05:05.060 Nvme0n1 : 1.03 1797.42 112.34 0.00 0.00 34958.75 5734.40 34515.63 00:05:05.060 [2024-11-06T12:48:44.344Z] =================================================================================================================== 00:05:05.060 [2024-11-06T12:48:44.344Z] Total : 1797.42 112.34 0.00 0.00 34958.75 5734.40 34515.63 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:05.319 rmmod nvme_tcp 00:05:05.319 rmmod nvme_fabrics 00:05:05.319 rmmod nvme_keyring 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 645154 ']' 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 645154 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 645154 ']' 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 645154 00:05:05.319 13:48:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 645154 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 645154' 00:05:05.319 killing process with pid 645154 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 645154 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 645154 00:05:05.319 [2024-11-06 13:48:44.564909] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.319 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.320 13:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:07.854 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:07.854 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:05:07.854 00:05:07.854 real 0m12.153s 00:05:07.854 user 0m21.542s 00:05:07.855 sys 0m4.931s 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:07.855 ************************************ 00:05:07.855 END TEST nvmf_host_management 00:05:07.855 ************************************ 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
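Before the lvol suite's output starts: the nvmftestfini teardown that just closed nvmf_host_management reduces to a handful of idempotent steps. A condensed sketch under the assumption of this run's names; the real helper retries the modprobes in a loop and resolves the pid from $nvmfpid (645154 here):

# Condensed sketch of the nvmftestfini teardown traced above.
sync
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload host-side NVMe modules
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"     # stop the target app if still alive
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # remove the target namespace
ip -4 addr flush cvl_0_1                              # clear the initiator-side address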
00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:07.855 ************************************ 00:05:07.855 START TEST nvmf_lvol 00:05:07.855 ************************************ 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:05:07.855 * Looking for test storage... 00:05:07.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.855 --rc genhtml_branch_coverage=1 00:05:07.855 --rc genhtml_function_coverage=1 00:05:07.855 --rc genhtml_legend=1 00:05:07.855 --rc geninfo_all_blocks=1 00:05:07.855 --rc geninfo_unexecuted_blocks=1 00:05:07.855 00:05:07.855 ' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.855 --rc genhtml_branch_coverage=1 00:05:07.855 --rc genhtml_function_coverage=1 00:05:07.855 --rc genhtml_legend=1 00:05:07.855 --rc geninfo_all_blocks=1 00:05:07.855 --rc geninfo_unexecuted_blocks=1 00:05:07.855 00:05:07.855 ' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.855 --rc genhtml_branch_coverage=1 00:05:07.855 --rc genhtml_function_coverage=1 00:05:07.855 --rc genhtml_legend=1 00:05:07.855 --rc geninfo_all_blocks=1 00:05:07.855 --rc geninfo_unexecuted_blocks=1 00:05:07.855 00:05:07.855 ' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.855 --rc genhtml_branch_coverage=1 00:05:07.855 --rc genhtml_function_coverage=1 00:05:07.855 --rc genhtml_legend=1 00:05:07.855 --rc geninfo_all_blocks=1 00:05:07.855 --rc geninfo_unexecuted_blocks=1 00:05:07.855 00:05:07.855 ' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
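The lt 1.15 2 check traced above is scripts/common.sh's cmp_versions at work: split both version strings on their separators and compare component by component, with the first difference deciding. A standalone sketch of the same idea (not the literal library source):

# Component-wise dotted-version compare, as in the lcov check above.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
    done
    return 1                                          # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'    # succeeds, as above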
00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.855 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:05:07.856 13:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:13.135 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:13.136 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:13.136 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.136 13:48:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:13.136 Found net devices under 0000:31:00.0: cvl_0_0 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:13.136 Found net devices under 0000:31:00.1: cvl_0_1 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:13.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:13.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:05:13.136 00:05:13.136 --- 10.0.0.2 ping statistics --- 00:05:13.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.136 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:13.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:13.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:05:13.136 00:05:13.136 --- 10.0.0.1 ping statistics --- 00:05:13.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.136 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=650579 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 650579 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 650579 ']' 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:13.136 13:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:05:13.396 [2024-11-06 13:48:52.449182] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
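Stripped of the xtrace noise, the nvmf_tcp_init sequence above builds a two-port topology on one box: the target port moves into its own network namespace so the initiator can reach it over real TCP. A sketch with this run's interface names and addresses (cvl_0_0 and cvl_0_1 are the two e810 ports found earlier):

# NVMe/TCP test topology as set up above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator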
00:05:13.396 [2024-11-06 13:48:52.449256] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:13.396 [2024-11-06 13:48:52.541160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.396 [2024-11-06 13:48:52.594074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:13.396 [2024-11-06 13:48:52.594129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:13.396 [2024-11-06 13:48:52.594138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.396 [2024-11-06 13:48:52.594145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.396 [2024-11-06 13:48:52.594151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:13.396 [2024-11-06 13:48:52.596036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.396 [2024-11-06 13:48:52.596204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.396 [2024-11-06 13:48:52.596204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:14.334 [2024-11-06 13:48:53.439414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.334 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:14.594 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:05:14.594 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:14.594 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:05:14.594 13:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:05:14.985 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:05:14.985 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ff8779ae-2592-4c6d-b9c6-1df497f1b5ff 00:05:14.985 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff8779ae-2592-4c6d-b9c6-1df497f1b5ff lvol 20 00:05:15.243 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1a429855-a5c6-4c56-a8a5-72695ea75227 00:05:15.243 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:15.502 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a429855-a5c6-4c56-a8a5-72695ea75227 00:05:15.502 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:15.761 [2024-11-06 13:48:54.838005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:15.761 13:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:15.761 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=651272 00:05:15.761 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:05:15.761 13:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:05:17.141 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1a429855-a5c6-4c56-a8a5-72695ea75227 MY_SNAPSHOT 00:05:17.141 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d73afe97-2bb3-4d33-bd0f-4e452488f1f6 00:05:17.141 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1a429855-a5c6-4c56-a8a5-72695ea75227 30 00:05:17.141 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d73afe97-2bb3-4d33-bd0f-4e452488f1f6 MY_CLONE 00:05:17.401 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=72aad390-097e-4f10-9685-d4038edd754b 00:05:17.401 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 72aad390-097e-4f10-9685-d4038edd754b 00:05:17.660 13:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 651272 00:05:27.645 Initializing NVMe Controllers 00:05:27.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:27.646 Controller IO queue size 128, less than required. 00:05:27.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:05:27.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:05:27.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:05:27.646 Initialization complete. Launching workers. 00:05:27.646 ======================================================== 00:05:27.646 Latency(us) 00:05:27.646 Device Information : IOPS MiB/s Average min max 00:05:27.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16739.58 65.39 7647.61 1420.26 51189.99 00:05:27.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17245.37 67.36 7423.30 341.61 40392.79 00:05:27.646 ======================================================== 00:05:27.646 Total : 33984.95 132.75 7533.79 341.61 51189.99 00:05:27.646 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a429855-a5c6-4c56-a8a5-72695ea75227 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff8779ae-2592-4c6d-b9c6-1df497f1b5ff 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:27.646 rmmod nvme_tcp 00:05:27.646 rmmod nvme_fabrics 00:05:27.646 rmmod nvme_keyring 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 650579 ']' 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 650579 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 650579 ']' 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 650579 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 650579 00:05:27.646 13:49:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 650579' 00:05:27.646 killing process with pid 650579 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 650579 00:05:27.646 13:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 650579 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:27.646 13:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:29.026 00:05:29.026 real 0m21.445s 00:05:29.026 user 1m2.747s 00:05:29.026 sys 0m6.851s 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:29.026 ************************************ 00:05:29.026 END TEST nvmf_lvol 00:05:29.026 ************************************ 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:29.026 ************************************ 00:05:29.026 START TEST nvmf_lvs_grow 00:05:29.026 ************************************ 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:05:29.026 * Looking for test storage... 
00:05:29.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:05:29.026 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.027 --rc genhtml_branch_coverage=1 00:05:29.027 --rc genhtml_function_coverage=1 00:05:29.027 --rc genhtml_legend=1 00:05:29.027 --rc geninfo_all_blocks=1 00:05:29.027 --rc geninfo_unexecuted_blocks=1 00:05:29.027 00:05:29.027 ' 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.027 --rc genhtml_branch_coverage=1 00:05:29.027 --rc genhtml_function_coverage=1 00:05:29.027 --rc genhtml_legend=1 00:05:29.027 --rc geninfo_all_blocks=1 00:05:29.027 --rc geninfo_unexecuted_blocks=1 00:05:29.027 00:05:29.027 ' 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:29.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.027 --rc genhtml_branch_coverage=1 00:05:29.027 --rc genhtml_function_coverage=1 00:05:29.027 --rc genhtml_legend=1 00:05:29.027 --rc geninfo_all_blocks=1 00:05:29.027 --rc geninfo_unexecuted_blocks=1 00:05:29.027 00:05:29.027 ' 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.027 --rc genhtml_branch_coverage=1 00:05:29.027 --rc genhtml_function_coverage=1 00:05:29.027 --rc genhtml_legend=1 00:05:29.027 --rc geninfo_all_blocks=1 00:05:29.027 --rc geninfo_unexecuted_blocks=1 00:05:29.027 00:05:29.027 ' 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:05:29.027 13:49:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:29.027 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:05:29.286 13:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:34.565 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:34.565 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.565 13:49:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:34.565 Found net devices under 0000:31:00.0: cvl_0_0 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:34.565 Found net devices under 0000:31:00.1: cvl_0_1 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.565 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:05:34.566 00:05:34.566 --- 10.0.0.2 ping statistics --- 00:05:34.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.566 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:34.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:05:34.566 00:05:34.566 --- 10.0.0.1 ping statistics --- 00:05:34.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.566 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=657995 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 657995 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 657995 ']' 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.566 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:34.566 [2024-11-06 13:49:13.725889] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:05:34.566 [2024-11-06 13:49:13.725939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.566 [2024-11-06 13:49:13.796646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.566 [2024-11-06 13:49:13.826848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.566 [2024-11-06 13:49:13.826874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.566 [2024-11-06 13:49:13.826880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.566 [2024-11-06 13:49:13.826884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.566 [2024-11-06 13:49:13.826888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:34.566 [2024-11-06 13:49:13.827356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.825 13:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:34.825 [2024-11-06 13:49:14.067909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:34.826 ************************************ 00:05:34.826 START TEST lvs_grow_clean 00:05:34.826 ************************************ 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:05:34.826 13:49:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:05:34.826 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:35.085 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:35.085 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:35.085 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:05:35.085 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:05:35.344 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:35.344 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:35.344 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:05:35.344 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:05:35.344 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:05:35.344 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 lvol 150 00:05:35.602 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3bddaebe-3c39-468d-b62d-80c13d1c4705 00:05:35.602 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:35.602 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:05:35.861 [2024-11-06 13:49:14.900631] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:05:35.861 [2024-11-06 13:49:14.900672] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:05:35.861 true 00:05:35.861 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:35.861 13:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:05:35.861 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:05:35.861 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:36.146 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3bddaebe-3c39-468d-b62d-80c13d1c4705 00:05:36.146 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:36.404 [2024-11-06 13:49:15.518451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=658669 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 658669 /var/tmp/bdevperf.sock 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 658669 ']' 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:36.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.404 13:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:05:36.663 [2024-11-06 13:49:15.716360] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:05:36.663 [2024-11-06 13:49:15.716414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658669 ]
00:05:36.663 [2024-11-06 13:49:15.794046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.663 [2024-11-06 13:49:15.830236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.232 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:37.232 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:05:37.232 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:05:37.490 Nvme0n1
00:05:37.490 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:05:37.750 [
00:05:37.750 {
00:05:37.750 "name": "Nvme0n1",
00:05:37.750 "aliases": [
00:05:37.750 "3bddaebe-3c39-468d-b62d-80c13d1c4705"
00:05:37.750 ],
00:05:37.750 "product_name": "NVMe disk",
00:05:37.750 "block_size": 4096,
00:05:37.750 "num_blocks": 38912,
00:05:37.750 "uuid": "3bddaebe-3c39-468d-b62d-80c13d1c4705",
00:05:37.750 "numa_id": 0,
00:05:37.750 "assigned_rate_limits": {
00:05:37.750 "rw_ios_per_sec": 0,
00:05:37.750 "rw_mbytes_per_sec": 0,
00:05:37.750 "r_mbytes_per_sec": 0,
00:05:37.750 "w_mbytes_per_sec": 0
00:05:37.750 },
00:05:37.750 "claimed": false,
00:05:37.750 "zoned": false,
00:05:37.750 "supported_io_types": {
00:05:37.750 "read": true,
00:05:37.750 "write": true,
00:05:37.750 "unmap": true,
00:05:37.750 "flush": true,
00:05:37.750 "reset": true,
00:05:37.750 "nvme_admin": true,
00:05:37.750 "nvme_io": true,
00:05:37.750 "nvme_io_md": false,
00:05:37.750 "write_zeroes": true,
00:05:37.750 "zcopy": false,
00:05:37.750 "get_zone_info": false,
00:05:37.750 "zone_management": false,
00:05:37.750 "zone_append": false,
00:05:37.750 "compare": true,
00:05:37.750 "compare_and_write": true,
00:05:37.750 "abort": true,
00:05:37.750 "seek_hole": false,
00:05:37.750 "seek_data": false,
00:05:37.750 "copy": true,
00:05:37.750 "nvme_iov_md": false
00:05:37.750 },
00:05:37.750 "memory_domains": [
00:05:37.750 {
00:05:37.750 "dma_device_id": "system",
00:05:37.750 "dma_device_type": 1
00:05:37.750 }
00:05:37.750 ],
00:05:37.750 "driver_specific": {
00:05:37.750 "nvme": [
00:05:37.750 {
00:05:37.750 "trid": {
00:05:37.750 "trtype": "TCP",
00:05:37.750 "adrfam": "IPv4",
00:05:37.750 "traddr": "10.0.0.2",
00:05:37.750 "trsvcid": "4420",
00:05:37.750 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:05:37.750 },
00:05:37.750 "ctrlr_data": {
00:05:37.750 "cntlid": 1,
00:05:37.750 "vendor_id": "0x8086",
00:05:37.750 "model_number": "SPDK bdev Controller",
00:05:37.750 "serial_number": "SPDK0",
00:05:37.750 "firmware_revision": "25.01",
00:05:37.750 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:05:37.750 "oacs": {
00:05:37.750 "security": 0,
00:05:37.750 "format": 0,
00:05:37.750 "firmware": 0,
00:05:37.750 "ns_manage": 0
00:05:37.750 },
00:05:37.750 "multi_ctrlr": true,
00:05:37.750 "ana_reporting": false
00:05:37.750 },
00:05:37.750 "vs": {
00:05:37.750 "nvme_version": "1.3"
00:05:37.750 },
00:05:37.750 "ns_data": {
00:05:37.750 "id": 1,
00:05:37.750 "can_share": true
00:05:37.750 }
00:05:37.750 }
00:05:37.750 ],
00:05:37.750 "mp_policy": "active_passive"
00:05:37.750 }
00:05:37.750 }
00:05:37.750 ]
00:05:37.750 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=658859
00:05:37.750 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:05:37.750 13:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:05:37.750 Running I/O for 10 seconds...
00:05:39.130 Latency(us)
[2024-11-06T12:49:18.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:05:39.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:39.130 Nvme0n1 : 1.00 24588.00 96.05 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:18.414Z] ===================================================================================================================
[2024-11-06T12:49:18.414Z] Total : 24588.00 96.05 0.00 0.00 0.00 0.00 0.00
00:05:39.130 
00:05:39.698 13:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5
00:05:39.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:39.957 Nvme0n1 : 2.00 24740.00 96.64 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:19.241Z] ===================================================================================================================
[2024-11-06T12:49:19.241Z] Total : 24740.00 96.64 0.00 0.00 0.00 0.00 0.00
00:05:39.957 
00:05:39.957 true
00:05:39.957 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5
00:05:39.957 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:05:40.216 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:05:40.216 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:05:40.216 13:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 658859
00:05:40.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:40.786 Nvme0n1 : 3.00 24791.67 96.84 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:20.070Z] ===================================================================================================================
[2024-11-06T12:49:20.070Z] Total : 24791.67 96.84 0.00 0.00 0.00 0.00 0.00
00:05:40.786 
00:05:41.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:41.725 Nvme0n1 : 4.00 24817.75 96.94 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:21.009Z] ===================================================================================================================
[2024-11-06T12:49:21.009Z] Total : 24817.75 96.94 0.00 0.00 0.00 0.00 0.00
00:05:41.725 
00:05:43.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:43.106 Nvme0n1 : 5.00 24857.80 97.10 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:22.390Z] ===================================================================================================================
[2024-11-06T12:49:22.390Z] Total : 24857.80 97.10 0.00 0.00 0.00 0.00 0.00
00:05:43.106 
00:05:44.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:44.044 Nvme0n1 : 6.00 24880.33 97.19 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:23.328Z] ===================================================================================================================
[2024-11-06T12:49:23.328Z] Total : 24880.33 97.19 0.00 0.00 0.00 0.00 0.00
00:05:44.044 
00:05:44.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:44.983 Nvme0n1 : 7.00 24905.14 97.29 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:24.267Z] ===================================================================================================================
[2024-11-06T12:49:24.267Z] Total : 24905.14 97.29 0.00 0.00 0.00 0.00 0.00
00:05:44.983 
00:05:45.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:45.920 Nvme0n1 : 8.00 24919.75 97.34 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:25.204Z] ===================================================================================================================
[2024-11-06T12:49:25.204Z] Total : 24919.75 97.34 0.00 0.00 0.00 0.00 0.00
00:05:45.920 
00:05:46.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:46.989 Nvme0n1 : 9.00 24931.22 97.39 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:26.273Z] ===================================================================================================================
[2024-11-06T12:49:26.273Z] Total : 24931.22 97.39 0.00 0.00 0.00 0.00 0.00
00:05:46.989 
00:05:47.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:47.927 Nvme0n1 : 10.00 24946.90 97.45 0.00 0.00 0.00 0.00 0.00
[2024-11-06T12:49:27.211Z] ===================================================================================================================
[2024-11-06T12:49:27.211Z] Total : 24946.90 97.45 0.00 0.00 0.00 0.00 0.00
00:05:47.927 
00:05:47.927 
00:05:47.927 Latency(us)
[2024-11-06T12:49:27.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:05:47.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:05:47.927 Nvme0n1 : 10.01 24946.49 97.45 0.00 0.00 5127.74 2525.87 10431.15
[2024-11-06T12:49:27.211Z] ===================================================================================================================
[2024-11-06T12:49:27.211Z] Total : 24946.49 97.45 0.00 0.00 5127.74 2525.87 10431.15
00:05:47.927 
00:05:47.927 {
00:05:47.927 "results": [
00:05:47.927 {
00:05:47.927 "job": "Nvme0n1",
00:05:47.927 "core_mask": "0x2",
00:05:47.927 "workload": "randwrite",
00:05:47.927 "status": "finished",
00:05:47.927 "queue_depth": 128,
00:05:47.927 "io_size": 4096,
00:05:47.927 "runtime": 10.005296,
00:05:47.927 "iops": 24946.488339775256,
00:05:47.927 "mibps": 97.4472200772471,
00:05:47.927 "io_failed": 0,
00:05:47.927 "io_timeout": 0,
00:05:47.927 "avg_latency_us": 5127.740291322946,
00:05:47.927 "min_latency_us": 2525.866666666667,
00:05:47.927 "max_latency_us": 10431.146666666667
00:05:47.927 }
00:05:47.927 ],
00:05:47.927 "core_count": 1
00:05:47.927 }
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 658669
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 658669 ']'
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 658669
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658669
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658669'
killing process with pid 658669
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 658669
Received shutdown signal, test time was about 10.000000 seconds
00:05:47.927 
00:05:47.927 Latency(us)
[2024-11-06T12:49:27.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-06T12:49:27.211Z] ===================================================================================================================
[2024-11-06T12:49:27.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 658669
00:05:47.927 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:48.187 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:05:48.446 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5
00:05:48.446 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:05:48.446 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:05:48.446 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:05:48.446 13:49:27
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:05:48.705 [2024-11-06 13:49:27.804665] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:48.706 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:48.706 request: 00:05:48.706 { 00:05:48.706 "uuid": "a31b9d59-dc9b-4239-9cca-2f6d88697fb5", 00:05:48.706 "method": "bdev_lvol_get_lvstores", 00:05:48.706 "req_id": 1 00:05:48.706 } 00:05:48.706 Got JSON-RPC error response 00:05:48.706 response: 00:05:48.706 { 00:05:48.706 "code": -19, 00:05:48.706 "message": "No such device" 00:05:48.706 } 00:05:48.965 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:05:48.965 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.965 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.965 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.965 13:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:48.965 aio_bdev 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3bddaebe-3c39-468d-b62d-80c13d1c4705 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=3bddaebe-3c39-468d-b62d-80c13d1c4705 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:05:48.965 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:05:49.224 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3bddaebe-3c39-468d-b62d-80c13d1c4705 -t 2000 00:05:49.224 [ 00:05:49.224 { 00:05:49.224 "name": "3bddaebe-3c39-468d-b62d-80c13d1c4705", 00:05:49.224 "aliases": [ 00:05:49.224 "lvs/lvol" 00:05:49.224 ], 00:05:49.224 "product_name": "Logical Volume", 00:05:49.224 "block_size": 4096, 00:05:49.224 "num_blocks": 38912, 00:05:49.224 "uuid": "3bddaebe-3c39-468d-b62d-80c13d1c4705", 00:05:49.224 "assigned_rate_limits": { 00:05:49.224 "rw_ios_per_sec": 0, 00:05:49.224 "rw_mbytes_per_sec": 0, 00:05:49.224 "r_mbytes_per_sec": 0, 00:05:49.224 "w_mbytes_per_sec": 0 00:05:49.224 }, 00:05:49.224 "claimed": false, 00:05:49.224 "zoned": false, 00:05:49.224 "supported_io_types": { 00:05:49.224 "read": true, 00:05:49.224 "write": true, 00:05:49.224 "unmap": true, 00:05:49.224 "flush": false, 00:05:49.224 "reset": true, 00:05:49.224 "nvme_admin": false, 00:05:49.224 "nvme_io": false, 00:05:49.224 "nvme_io_md": false, 00:05:49.224 "write_zeroes": true, 00:05:49.224 "zcopy": false, 00:05:49.224 "get_zone_info": false, 00:05:49.224 "zone_management": false, 00:05:49.224 "zone_append": false, 00:05:49.224 "compare": false, 00:05:49.224 "compare_and_write": false, 00:05:49.224 "abort": false, 00:05:49.224 "seek_hole": true, 00:05:49.224 "seek_data": true, 00:05:49.224 "copy": false, 00:05:49.224 "nvme_iov_md": false 00:05:49.224 }, 00:05:49.224 "driver_specific": { 00:05:49.224 "lvol": { 00:05:49.224 "lvol_store_uuid": "a31b9d59-dc9b-4239-9cca-2f6d88697fb5", 00:05:49.224 "base_bdev": "aio_bdev", 00:05:49.224 "thin_provision": false, 00:05:49.224 "num_allocated_clusters": 38, 00:05:49.224 "snapshot": false, 00:05:49.224 "clone": false, 00:05:49.224 "esnap_clone": false 00:05:49.224 } 00:05:49.224 } 00:05:49.224 } 00:05:49.224 ] 00:05:49.224 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:05:49.225 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:49.225 
13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:05:49.483 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:05:49.483 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:05:49.483 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:49.483 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:05:49.483 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3bddaebe-3c39-468d-b62d-80c13d1c4705 00:05:49.743 13:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a31b9d59-dc9b-4239-9cca-2f6d88697fb5 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:50.002 00:05:50.002 real 0m15.149s 00:05:50.002 user 0m14.843s 00:05:50.002 sys 0m1.142s 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:05:50.002 ************************************ 00:05:50.002 END TEST lvs_grow_clean 00:05:50.002 ************************************ 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.002 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:50.261 ************************************ 00:05:50.261 START TEST lvs_grow_dirty 00:05:50.261 ************************************ 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:05:50.261 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:05:50.521 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc5eae88-23e8-40d4-8830-9d777673ef93 00:05:50.521 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:05:50.521 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:05:50.521 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:05:50.521 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:05:50.521 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc5eae88-23e8-40d4-8830-9d777673ef93 lvol 150 00:05:50.780 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:05:50.780 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:50.780 13:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:05:51.038 [2024-11-06 13:49:30.102907] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:05:51.038 [2024-11-06 13:49:30.102956] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:05:51.038 true 00:05:51.038 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:05:51.038 13:49:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:05:51.038 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:05:51.038 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:51.297 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:05:51.555 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:51.555 [2024-11-06 13:49:30.728701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.555 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=662092 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 662092 /var/tmp/bdevperf.sock 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 662092 ']' 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:51.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.814 13:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:05:51.814 [2024-11-06 13:49:30.916141] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:05:51.814 [2024-11-06 13:49:30.916181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662092 ] 00:05:51.814 [2024-11-06 13:49:30.971571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.814 [2024-11-06 13:49:31.001510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.814 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.814 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:05:51.814 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:05:52.073 Nvme0n1 00:05:52.332 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:05:52.332 [ 00:05:52.332 { 00:05:52.332 "name": "Nvme0n1", 00:05:52.332 "aliases": [ 00:05:52.332 "2a856473-7b4e-46af-acc8-064b9cfcf9bd" 00:05:52.332 ], 00:05:52.332 "product_name": "NVMe disk", 00:05:52.332 "block_size": 4096, 00:05:52.332 "num_blocks": 38912, 00:05:52.332 "uuid": "2a856473-7b4e-46af-acc8-064b9cfcf9bd", 00:05:52.332 "numa_id": 0, 00:05:52.332 "assigned_rate_limits": { 00:05:52.332 "rw_ios_per_sec": 0, 00:05:52.332 "rw_mbytes_per_sec": 0, 00:05:52.332 "r_mbytes_per_sec": 0, 00:05:52.332 "w_mbytes_per_sec": 0 00:05:52.332 }, 00:05:52.332 "claimed": false, 00:05:52.332 "zoned": false, 00:05:52.332 "supported_io_types": { 00:05:52.332 "read": true, 00:05:52.332 "write": true, 00:05:52.332 "unmap": true, 00:05:52.332 "flush": true, 00:05:52.332 "reset": true, 00:05:52.332 "nvme_admin": true, 00:05:52.332 "nvme_io": true, 00:05:52.332 "nvme_io_md": false, 00:05:52.332 "write_zeroes": true, 00:05:52.332 "zcopy": false, 00:05:52.332 "get_zone_info": false, 00:05:52.332 "zone_management": false, 00:05:52.332 "zone_append": false, 00:05:52.332 "compare": true, 00:05:52.332 "compare_and_write": true, 00:05:52.332 "abort": true, 00:05:52.332 "seek_hole": false, 00:05:52.332 "seek_data": false, 00:05:52.332 "copy": true, 00:05:52.332 "nvme_iov_md": false 00:05:52.332 }, 00:05:52.332 "memory_domains": [ 00:05:52.332 { 00:05:52.332 "dma_device_id": "system", 00:05:52.332 "dma_device_type": 1 00:05:52.332 } 00:05:52.332 ], 00:05:52.332 "driver_specific": { 00:05:52.332 "nvme": [ 00:05:52.332 { 00:05:52.332 "trid": { 00:05:52.332 "trtype": "TCP", 00:05:52.332 "adrfam": "IPv4", 00:05:52.332 "traddr": "10.0.0.2", 00:05:52.332 "trsvcid": "4420", 00:05:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:05:52.332 }, 00:05:52.332 "ctrlr_data": { 00:05:52.332 "cntlid": 1, 00:05:52.332 "vendor_id": "0x8086", 00:05:52.332 "model_number": "SPDK bdev Controller", 00:05:52.332 "serial_number": "SPDK0", 00:05:52.332 "firmware_revision": "25.01", 00:05:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:52.332 "oacs": { 00:05:52.332 "security": 0, 00:05:52.332 "format": 0, 00:05:52.332 "firmware": 0, 00:05:52.332 "ns_manage": 0 00:05:52.332 }, 00:05:52.332 "multi_ctrlr": true, 00:05:52.332 
"ana_reporting": false 00:05:52.332 }, 00:05:52.332 "vs": { 00:05:52.332 "nvme_version": "1.3" 00:05:52.332 }, 00:05:52.332 "ns_data": { 00:05:52.332 "id": 1, 00:05:52.332 "can_share": true 00:05:52.332 } 00:05:52.332 } 00:05:52.332 ], 00:05:52.332 "mp_policy": "active_passive" 00:05:52.332 } 00:05:52.332 } 00:05:52.332 ] 00:05:52.332 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=662099 00:05:52.332 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:05:52.332 13:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:05:52.332 Running I/O for 10 seconds... 00:05:53.710 Latency(us) 00:05:53.710 [2024-11-06T12:49:32.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:53.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:53.711 Nvme0n1 : 1.00 24513.00 95.75 0.00 0.00 0.00 0.00 0.00 00:05:53.711 [2024-11-06T12:49:32.995Z] =================================================================================================================== 00:05:53.711 [2024-11-06T12:49:32.995Z] Total : 24513.00 95.75 0.00 0.00 0.00 0.00 0.00 00:05:53.711 00:05:54.280 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:05:54.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:54.541 Nvme0n1 : 2.00 24698.00 96.48 0.00 0.00 0.00 0.00 0.00 00:05:54.541 [2024-11-06T12:49:33.825Z] =================================================================================================================== 00:05:54.541 [2024-11-06T12:49:33.825Z] Total : 24698.00 96.48 0.00 0.00 0.00 0.00 0.00 00:05:54.541 00:05:54.541 true 00:05:54.541 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:05:54.541 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:05:54.800 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:05:54.800 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:05:54.800 13:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 662099 00:05:55.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:55.370 Nvme0n1 : 3.00 24785.00 96.82 0.00 0.00 0.00 0.00 0.00 00:05:55.370 [2024-11-06T12:49:34.654Z] =================================================================================================================== 00:05:55.370 [2024-11-06T12:49:34.654Z] Total : 24785.00 96.82 0.00 0.00 0.00 0.00 0.00 00:05:55.370 00:05:56.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:56.753 Nvme0n1 : 4.00 24829.00 96.99 0.00 0.00 0.00 0.00 0.00 00:05:56.753 [2024-11-06T12:49:36.037Z] 
=================================================================================================================== 00:05:56.753 [2024-11-06T12:49:36.037Z] Total : 24829.00 96.99 0.00 0.00 0.00 0.00 0.00 00:05:56.753 00:05:57.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:57.693 Nvme0n1 : 5.00 24867.20 97.14 0.00 0.00 0.00 0.00 0.00 00:05:57.693 [2024-11-06T12:49:36.977Z] =================================================================================================================== 00:05:57.693 [2024-11-06T12:49:36.977Z] Total : 24867.20 97.14 0.00 0.00 0.00 0.00 0.00 00:05:57.693 00:05:58.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:58.634 Nvme0n1 : 6.00 24893.00 97.24 0.00 0.00 0.00 0.00 0.00 00:05:58.634 [2024-11-06T12:49:37.918Z] =================================================================================================================== 00:05:58.634 [2024-11-06T12:49:37.918Z] Total : 24893.00 97.24 0.00 0.00 0.00 0.00 0.00 00:05:58.634 00:05:59.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:59.575 Nvme0n1 : 7.00 24920.14 97.34 0.00 0.00 0.00 0.00 0.00 00:05:59.575 [2024-11-06T12:49:38.859Z] =================================================================================================================== 00:05:59.575 [2024-11-06T12:49:38.859Z] Total : 24920.14 97.34 0.00 0.00 0.00 0.00 0.00 00:05:59.575 00:06:00.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:00.516 Nvme0n1 : 8.00 24937.00 97.41 0.00 0.00 0.00 0.00 0.00 00:06:00.516 [2024-11-06T12:49:39.800Z] =================================================================================================================== 00:06:00.516 [2024-11-06T12:49:39.800Z] Total : 24937.00 97.41 0.00 0.00 0.00 0.00 0.00 00:06:00.516 00:06:01.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:01.455 Nvme0n1 : 9.00 24945.44 97.44 0.00 0.00 0.00 0.00 0.00 00:06:01.455 [2024-11-06T12:49:40.739Z] =================================================================================================================== 00:06:01.455 [2024-11-06T12:49:40.739Z] Total : 24945.44 97.44 0.00 0.00 0.00 0.00 0.00 00:06:01.455 00:06:02.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:02.394 Nvme0n1 : 10.00 24959.10 97.50 0.00 0.00 0.00 0.00 0.00 00:06:02.394 [2024-11-06T12:49:41.678Z] =================================================================================================================== 00:06:02.394 [2024-11-06T12:49:41.678Z] Total : 24959.10 97.50 0.00 0.00 0.00 0.00 0.00 00:06:02.394 00:06:02.394 00:06:02.394 Latency(us) 00:06:02.394 [2024-11-06T12:49:41.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:02.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:02.394 Nvme0n1 : 10.00 24956.85 97.49 0.00 0.00 5125.77 3126.61 12997.97 00:06:02.394 [2024-11-06T12:49:41.678Z] =================================================================================================================== 00:06:02.394 [2024-11-06T12:49:41.678Z] Total : 24956.85 97.49 0.00 0.00 5125.77 3126.61 12997.97 00:06:02.394 { 00:06:02.394 "results": [ 00:06:02.394 { 00:06:02.394 "job": "Nvme0n1", 00:06:02.394 "core_mask": "0x2", 00:06:02.394 "workload": "randwrite", 00:06:02.394 "status": "finished", 00:06:02.394 "queue_depth": 128, 00:06:02.394 "io_size": 4096, 00:06:02.394 
"runtime": 10.003424, 00:06:02.394 "iops": 24956.85477292575, 00:06:02.394 "mibps": 97.48771395674122, 00:06:02.394 "io_failed": 0, 00:06:02.394 "io_timeout": 0, 00:06:02.394 "avg_latency_us": 5125.774942066486, 00:06:02.394 "min_latency_us": 3126.6133333333332, 00:06:02.394 "max_latency_us": 12997.973333333333 00:06:02.394 } 00:06:02.394 ], 00:06:02.394 "core_count": 1 00:06:02.394 } 00:06:02.394 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 662092 00:06:02.394 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 662092 ']' 00:06:02.394 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 662092 00:06:02.394 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:06:02.394 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.394 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 662092 00:06:02.655 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:02.655 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:02.655 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 662092' 00:06:02.655 killing process with pid 662092 00:06:02.655 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 662092 00:06:02.655 Received shutdown signal, test time was about 10.000000 seconds 00:06:02.655 00:06:02.655 Latency(us) 00:06:02.655 [2024-11-06T12:49:41.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:02.655 [2024-11-06T12:49:41.939Z] =================================================================================================================== 00:06:02.655 [2024-11-06T12:49:41.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:02.655 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 662092 00:06:02.655 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:02.913 13:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:02.913 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:02.914 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:03.172 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:06:03.173 13:49:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 657995 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 657995 00:06:03.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 657995 Killed "${NVMF_APP[@]}" "$@" 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=664618 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 664618 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 664618 ']' 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:03.173 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:03.173 [2024-11-06 13:49:42.361059] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:06:03.173 [2024-11-06 13:49:42.361101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.173 [2024-11-06 13:49:42.420710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.173 [2024-11-06 13:49:42.449663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.173 [2024-11-06 13:49:42.449690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.173 [2024-11-06 13:49:42.449696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.173 [2024-11-06 13:49:42.449701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:03.173 [2024-11-06 13:49:42.449705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:03.173 [2024-11-06 13:49:42.450185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:03.432 [2024-11-06 13:49:42.687524] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:06:03.432 [2024-11-06 13:49:42.687597] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:06:03.432 [2024-11-06 13:49:42.687621] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:03.432 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:03.691 13:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2a856473-7b4e-46af-acc8-064b9cfcf9bd -t 2000 00:06:03.957 [ 00:06:03.957 { 00:06:03.957 "name": "2a856473-7b4e-46af-acc8-064b9cfcf9bd", 00:06:03.957 "aliases": [ 00:06:03.957 "lvs/lvol" 00:06:03.957 ], 00:06:03.957 "product_name": "Logical Volume", 00:06:03.957 "block_size": 4096, 00:06:03.957 "num_blocks": 38912, 00:06:03.957 "uuid": "2a856473-7b4e-46af-acc8-064b9cfcf9bd", 00:06:03.957 "assigned_rate_limits": { 00:06:03.958 "rw_ios_per_sec": 0, 00:06:03.958 "rw_mbytes_per_sec": 0, 
00:06:03.958 "r_mbytes_per_sec": 0, 00:06:03.958 "w_mbytes_per_sec": 0 00:06:03.958 }, 00:06:03.958 "claimed": false, 00:06:03.958 "zoned": false, 00:06:03.958 "supported_io_types": { 00:06:03.958 "read": true, 00:06:03.958 "write": true, 00:06:03.958 "unmap": true, 00:06:03.958 "flush": false, 00:06:03.958 "reset": true, 00:06:03.958 "nvme_admin": false, 00:06:03.958 "nvme_io": false, 00:06:03.958 "nvme_io_md": false, 00:06:03.958 "write_zeroes": true, 00:06:03.958 "zcopy": false, 00:06:03.958 "get_zone_info": false, 00:06:03.958 "zone_management": false, 00:06:03.958 "zone_append": false, 00:06:03.958 "compare": false, 00:06:03.958 "compare_and_write": false, 00:06:03.958 "abort": false, 00:06:03.958 "seek_hole": true, 00:06:03.958 "seek_data": true, 00:06:03.958 "copy": false, 00:06:03.958 "nvme_iov_md": false 00:06:03.958 }, 00:06:03.958 "driver_specific": { 00:06:03.958 "lvol": { 00:06:03.958 "lvol_store_uuid": "fc5eae88-23e8-40d4-8830-9d777673ef93", 00:06:03.958 "base_bdev": "aio_bdev", 00:06:03.958 "thin_provision": false, 00:06:03.958 "num_allocated_clusters": 38, 00:06:03.958 "snapshot": false, 00:06:03.958 "clone": false, 00:06:03.958 "esnap_clone": false 00:06:03.958 } 00:06:03.958 } 00:06:03.958 } 00:06:03.958 ] 00:06:03.958 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:06:03.958 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:03.958 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:06:03.958 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:06:03.958 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:03.958 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:04.220 [2024-11-06 13:49:43.455949] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:04.220 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:04.480 request: 00:06:04.480 { 00:06:04.480 "uuid": "fc5eae88-23e8-40d4-8830-9d777673ef93", 00:06:04.480 "method": "bdev_lvol_get_lvstores", 00:06:04.480 "req_id": 1 00:06:04.480 } 00:06:04.480 Got JSON-RPC error response 00:06:04.480 response: 00:06:04.480 { 00:06:04.480 "code": -19, 00:06:04.480 "message": "No such device" 00:06:04.480 } 00:06:04.480 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:06:04.480 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.480 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.480 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.480 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:04.740 aio_bdev 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:04.740 13:49:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:04.740 13:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2a856473-7b4e-46af-acc8-064b9cfcf9bd -t 2000 00:06:04.999 [ 00:06:04.999 { 00:06:04.999 "name": "2a856473-7b4e-46af-acc8-064b9cfcf9bd", 00:06:04.999 "aliases": [ 00:06:04.999 "lvs/lvol" 00:06:04.999 ], 00:06:04.999 "product_name": "Logical Volume", 00:06:04.999 "block_size": 4096, 00:06:04.999 "num_blocks": 38912, 00:06:04.999 "uuid": "2a856473-7b4e-46af-acc8-064b9cfcf9bd", 00:06:04.999 "assigned_rate_limits": { 00:06:04.999 "rw_ios_per_sec": 0, 00:06:04.999 "rw_mbytes_per_sec": 0, 00:06:04.999 "r_mbytes_per_sec": 0, 00:06:04.999 "w_mbytes_per_sec": 0 00:06:04.999 }, 00:06:04.999 "claimed": false, 00:06:04.999 "zoned": false, 00:06:04.999 "supported_io_types": { 00:06:04.999 "read": true, 00:06:04.999 "write": true, 00:06:04.999 "unmap": true, 00:06:04.999 "flush": false, 00:06:04.999 "reset": true, 00:06:04.999 "nvme_admin": false, 00:06:04.999 "nvme_io": false, 00:06:04.999 "nvme_io_md": false, 00:06:04.999 "write_zeroes": true, 00:06:04.999 "zcopy": false, 00:06:04.999 "get_zone_info": false, 00:06:04.999 "zone_management": false, 00:06:04.999 "zone_append": false, 00:06:04.999 "compare": false, 00:06:04.999 "compare_and_write": false, 00:06:04.999 "abort": false, 00:06:04.999 "seek_hole": true, 00:06:04.999 "seek_data": true, 00:06:04.999 "copy": false, 00:06:04.999 "nvme_iov_md": false 00:06:04.999 }, 00:06:04.999 "driver_specific": { 00:06:04.999 "lvol": { 00:06:04.999 "lvol_store_uuid": "fc5eae88-23e8-40d4-8830-9d777673ef93", 00:06:04.999 "base_bdev": "aio_bdev", 00:06:04.999 "thin_provision": false, 00:06:04.999 "num_allocated_clusters": 38, 00:06:04.999 "snapshot": false, 00:06:04.999 "clone": false, 00:06:04.999 "esnap_clone": false 00:06:05.000 } 00:06:05.000 } 00:06:05.000 } 00:06:05.000 ] 00:06:05.000 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:06:05.000 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:05.000 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:05.000 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:05.000 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:05.000 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:05.259 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:05.259 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a856473-7b4e-46af-acc8-064b9cfcf9bd 00:06:05.519 13:49:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc5eae88-23e8-40d4-8830-9d777673ef93 00:06:05.519 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:05.778 00:06:05.778 real 0m15.620s 00:06:05.778 user 0m41.925s 00:06:05.778 sys 0m2.661s 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:05.778 ************************************ 00:06:05.778 END TEST lvs_grow_dirty 00:06:05.778 ************************************ 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:06:05.778 nvmf_trace.0 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.778 13:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.778 rmmod nvme_tcp 00:06:05.778 rmmod nvme_fabrics 00:06:05.778 rmmod nvme_keyring 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:06:05.778 
13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 664618 ']' 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 664618 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 664618 ']' 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 664618 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.778 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 664618 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 664618' 00:06:06.038 killing process with pid 664618 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 664618 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 664618 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.038 13:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.574 00:06:08.574 real 0m39.065s 00:06:08.574 user 1m1.272s 00:06:08.574 sys 0m8.104s 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:08.574 ************************************ 00:06:08.574 END TEST nvmf_lvs_grow 00:06:08.574 ************************************ 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.574 ************************************ 00:06:08.574 START TEST nvmf_bdev_io_wait 00:06:08.574 ************************************ 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:06:08.574 * Looking for test storage... 00:06:08.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.574 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.575 --rc genhtml_branch_coverage=1 00:06:08.575 --rc genhtml_function_coverage=1 00:06:08.575 --rc genhtml_legend=1 00:06:08.575 --rc geninfo_all_blocks=1 00:06:08.575 --rc geninfo_unexecuted_blocks=1 00:06:08.575 00:06:08.575 ' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.575 --rc genhtml_branch_coverage=1 00:06:08.575 --rc genhtml_function_coverage=1 00:06:08.575 --rc genhtml_legend=1 00:06:08.575 --rc geninfo_all_blocks=1 00:06:08.575 --rc geninfo_unexecuted_blocks=1 00:06:08.575 00:06:08.575 ' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.575 --rc genhtml_branch_coverage=1 00:06:08.575 --rc genhtml_function_coverage=1 00:06:08.575 --rc genhtml_legend=1 00:06:08.575 --rc geninfo_all_blocks=1 00:06:08.575 --rc geninfo_unexecuted_blocks=1 00:06:08.575 00:06:08.575 ' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.575 --rc genhtml_branch_coverage=1 00:06:08.575 --rc genhtml_function_coverage=1 00:06:08.575 --rc genhtml_legend=1 00:06:08.575 --rc geninfo_all_blocks=1 00:06:08.575 --rc geninfo_unexecuted_blocks=1 00:06:08.575 00:06:08.575 ' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.575 13:49:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:06:08.575 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.576 13:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:13.850 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.850 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.850 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.850 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.850 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.850 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:13.851 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:13.851 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.851 13:49:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:13.851 Found net devices under 0000:31:00.0: cvl_0_0 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:13.851 Found net devices under 0000:31:00.1: cvl_0_1 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:06:13.851 00:06:13.851 --- 10.0.0.2 ping statistics --- 00:06:13.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.851 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:06:13.851 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:13.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:06:13.851 00:06:13.851 --- 10.0.0.1 ping statistics --- 00:06:13.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.851 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=669639 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 669639 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 669639 ']' 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:13.852 13:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:06:13.852 [2024-11-06 13:49:52.999535] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
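The --wait-for-rpc flag on the nvmf_tgt command line above is what makes the next stretch of RPCs work: the target parks before completing subsystem initialization, so bdev options can still be changed, and only framework_start_init releases it. The RPC sequence the log performs next reduces to roughly this sketch ($rpc abbreviates the full workspace path to scripts/rpc.py; all arguments are the ones visible in the log):

  $rpc bdev_set_options -p 5 -c 1           # only possible while the target waits for RPC
  $rpc framework_start_init                 # finish the deferred initialization
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen further down.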
00:06:13.852 [2024-11-06 13:49:52.999596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.852 [2024-11-06 13:49:53.091356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.111 [2024-11-06 13:49:53.145795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.111 [2024-11-06 13:49:53.145848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.111 [2024-11-06 13:49:53.145857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.111 [2024-11-06 13:49:53.145863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.111 [2024-11-06 13:49:53.145870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:14.111 [2024-11-06 13:49:53.147884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.111 [2024-11-06 13:49:53.148085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.111 [2024-11-06 13:49:53.148086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.111 [2024-11-06 13:49:53.147919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:06:14.681 [2024-11-06 13:49:53.876482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 Malloc0 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:14.681 [2024-11-06 13:49:53.916739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=669876 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=669878 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=669879 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:06:14.681 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=669881 00:06:14.682 13:49:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:14.682 { 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme$subsystem", 00:06:14.682 "trtype": "$TEST_TRANSPORT", 00:06:14.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "$NVMF_PORT", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:14.682 "hdgst": ${hdgst:-false}, 00:06:14.682 "ddgst": ${ddgst:-false} 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 } 00:06:14.682 EOF 00:06:14.682 )") 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:14.682 { 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme$subsystem", 00:06:14.682 "trtype": "$TEST_TRANSPORT", 00:06:14.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "$NVMF_PORT", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:14.682 "hdgst": ${hdgst:-false}, 00:06:14.682 "ddgst": ${ddgst:-false} 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 } 00:06:14.682 EOF 00:06:14.682 )") 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:14.682 13:49:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:14.682 { 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme$subsystem", 00:06:14.682 "trtype": "$TEST_TRANSPORT", 00:06:14.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "$NVMF_PORT", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:14.682 "hdgst": ${hdgst:-false}, 00:06:14.682 "ddgst": ${ddgst:-false} 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 } 00:06:14.682 EOF 00:06:14.682 )") 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:14.682 { 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme$subsystem", 00:06:14.682 "trtype": "$TEST_TRANSPORT", 00:06:14.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "$NVMF_PORT", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:14.682 "hdgst": ${hdgst:-false}, 00:06:14.682 "ddgst": ${ddgst:-false} 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 } 00:06:14.682 EOF 00:06:14.682 )") 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 669876 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
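The --json /dev/fd/63 argument on each bdevperf command line is the signature of bash process substitution: each instance reads the attach-controller parameters that gen_nvmf_target_json emits (printed in expanded form just below) without touching a temp file. The four launches condense to roughly this sketch; $bp abbreviates the full path to build/examples/bdevperf, and the PID bookkeeping is shortened from the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID handling in the log:

  $bp -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $bp -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
  $bp -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $bp -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID

Disjoint core masks (-m 0x10 through 0x80) and shm ids (-i 1 through 4, hence the spdk1..spdk4 EAL file prefixes below) keep the four DPDK processes from colliding on cores or hugepage files.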
00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme1", 00:06:14.682 "trtype": "tcp", 00:06:14.682 "traddr": "10.0.0.2", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "4420", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:14.682 "hdgst": false, 00:06:14.682 "ddgst": false 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 }' 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme1", 00:06:14.682 "trtype": "tcp", 00:06:14.682 "traddr": "10.0.0.2", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "4420", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:14.682 "hdgst": false, 00:06:14.682 "ddgst": false 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 }' 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme1", 00:06:14.682 "trtype": "tcp", 00:06:14.682 "traddr": "10.0.0.2", 00:06:14.682 "adrfam": "ipv4", 00:06:14.682 "trsvcid": "4420", 00:06:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:14.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:14.682 "hdgst": false, 00:06:14.682 "ddgst": false 00:06:14.682 }, 00:06:14.682 "method": "bdev_nvme_attach_controller" 00:06:14.682 }' 00:06:14.682 13:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:14.682 "params": { 00:06:14.682 "name": "Nvme1", 00:06:14.683 "trtype": "tcp", 00:06:14.683 "traddr": "10.0.0.2", 00:06:14.683 "adrfam": "ipv4", 00:06:14.683 "trsvcid": "4420", 00:06:14.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:14.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:14.683 "hdgst": false, 00:06:14.683 "ddgst": false 00:06:14.683 }, 00:06:14.683 "method": "bdev_nvme_attach_controller" 00:06:14.683 }' 00:06:14.683 [2024-11-06 13:49:53.954948] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:06:14.683 [2024-11-06 13:49:53.954949] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:06:14.683 [2024-11-06 13:49:53.954999] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:06:14.683 [2024-11-06 13:49:53.954999] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:06:14.683 [2024-11-06 13:49:53.956249] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:06:14.683 [2024-11-06 13:49:53.956251] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:06:14.683 [2024-11-06 13:49:53.956296] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:06:14.683 [2024-11-06 13:49:53.956297] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:06:14.683 [2024-11-06 13:49:54.111858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.943 [2024-11-06 13:49:54.140730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.943 [2024-11-06 13:49:54.163788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.943 [2024-11-06 13:49:54.194229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:06:14.943 [2024-11-06 13:49:54.202236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.204 [2024-11-06 13:49:54.230624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:06:15.204 [2024-11-06 13:49:54.241632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.204 [2024-11-06 13:49:54.270275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:06:15.204 Running I/O for 1 seconds... 00:06:15.204 Running I/O for 1 seconds... 00:06:15.204 Running I/O for 1 seconds... 00:06:15.465 Running I/O for 1 seconds... 00:06:16.407 18388.00 IOPS, 71.83 MiB/s 00:06:16.407 Latency(us) 00:06:16.407 [2024-11-06T12:49:55.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.407 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:06:16.407 Nvme1n1 : 1.01 18444.42 72.05 0.00 0.00 6920.21 3604.48 13653.33 00:06:16.407 [2024-11-06T12:49:55.691Z] =================================================================================================================== 00:06:16.407 [2024-11-06T12:49:55.691Z] Total : 18444.42 72.05 0.00 0.00 6920.21 3604.48 13653.33 00:06:16.407 183496.00 IOPS, 716.78 MiB/s 00:06:16.407 Latency(us) 00:06:16.407 [2024-11-06T12:49:55.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.407 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:06:16.407 Nvme1n1 : 1.00 183126.58 715.34 0.00 0.00 695.09 312.32 2020.69 00:06:16.407 [2024-11-06T12:49:55.691Z] =================================================================================================================== 00:06:16.407 [2024-11-06T12:49:55.691Z] Total : 183126.58 715.34 0.00 0.00 695.09 312.32 2020.69 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 669878 00:06:16.407 17862.00 IOPS, 69.77 MiB/s 00:06:16.407 Latency(us) 00:06:16.407 [2024-11-06T12:49:55.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.407 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:06:16.407 Nvme1n1 : 1.01 17936.31 70.06 0.00 0.00 7118.24 3085.65 16384.00 00:06:16.407 [2024-11-06T12:49:55.691Z] =================================================================================================================== 00:06:16.407 [2024-11-06T12:49:55.691Z] Total :
17936.31 70.06 0.00 0.00 7118.24 3085.65 16384.00 00:06:16.407 11188.00 IOPS, 43.70 MiB/s 00:06:16.407 Latency(us) 00:06:16.407 [2024-11-06T12:49:55.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.407 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:06:16.407 Nvme1n1 : 1.01 11248.39 43.94 0.00 0.00 11341.71 4532.91 17148.59 00:06:16.407 [2024-11-06T12:49:55.691Z] =================================================================================================================== 00:06:16.407 [2024-11-06T12:49:55.691Z] Total : 11248.39 43.94 0.00 0.00 11341.71 4532.91 17148.59 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 669879 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 669881 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:16.407 rmmod nvme_tcp 00:06:16.407 rmmod nvme_fabrics 00:06:16.407 rmmod nvme_keyring 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 669639 ']' 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 669639 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 669639 ']' 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 669639 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.407 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 669639 00:06:16.667 13:49:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 669639' 00:06:16.667 killing process with pid 669639 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 669639 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 669639 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.667 13:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.206 00:06:19.206 real 0m10.600s 00:06:19.206 user 0m17.224s 00:06:19.206 sys 0m5.592s 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:19.206 ************************************ 00:06:19.206 END TEST nvmf_bdev_io_wait 00:06:19.206 ************************************ 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.206 ************************************ 00:06:19.206 START TEST nvmf_queue_depth 00:06:19.206 ************************************ 00:06:19.206 13:49:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:06:19.206 * Looking for test storage... 
00:06:19.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.206 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.207 --rc genhtml_branch_coverage=1 00:06:19.207 --rc genhtml_function_coverage=1 00:06:19.207 --rc genhtml_legend=1 00:06:19.207 --rc geninfo_all_blocks=1 00:06:19.207 --rc geninfo_unexecuted_blocks=1 00:06:19.207 00:06:19.207 ' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.207 --rc genhtml_branch_coverage=1 00:06:19.207 --rc genhtml_function_coverage=1 00:06:19.207 --rc genhtml_legend=1 00:06:19.207 --rc geninfo_all_blocks=1 00:06:19.207 --rc geninfo_unexecuted_blocks=1 00:06:19.207 00:06:19.207 ' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.207 --rc genhtml_branch_coverage=1 00:06:19.207 --rc genhtml_function_coverage=1 00:06:19.207 --rc genhtml_legend=1 00:06:19.207 --rc geninfo_all_blocks=1 00:06:19.207 --rc geninfo_unexecuted_blocks=1 00:06:19.207 00:06:19.207 ' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.207 --rc genhtml_branch_coverage=1 00:06:19.207 --rc genhtml_function_coverage=1 00:06:19.207 --rc genhtml_legend=1 00:06:19.207 --rc geninfo_all_blocks=1 00:06:19.207 --rc geninfo_unexecuted_blocks=1 00:06:19.207 00:06:19.207 ' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:06:19.207 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.208 13:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.486 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:24.486 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:24.487 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:24.487 Found net devices under 0000:31:00.0: cvl_0_0 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:24.487 Found net devices under 0000:31:00.1: cvl_0_1 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:06:24.487 00:06:24.487 --- 10.0.0.2 ping statistics --- 00:06:24.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.487 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:24.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:06:24.487 00:06:24.487 --- 10.0.0.1 ping statistics --- 00:06:24.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.487 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=674600 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 674600 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 674600 ']' 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:24.487 13:50:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:06:24.487 [2024-11-06 13:50:03.546685] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
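The namespace plumbing traced above reduces to the following sequence (a minimal sketch, assuming the two e810-backed ports are named cvl_0_0 and cvl_0_1 as in this run; any pair of connected interfaces works the same way):

# Put the target-side port in its own network namespace so initiator and
# target traverse a real NVMe/TCP path on a single host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # verify initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # verify target -> initiator
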
00:06:24.487 [2024-11-06 13:50:03.546747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.487 [2024-11-06 13:50:03.641417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.487 [2024-11-06 13:50:03.692858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.487 [2024-11-06 13:50:03.692910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.487 [2024-11-06 13:50:03.692920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.487 [2024-11-06 13:50:03.692927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.487 [2024-11-06 13:50:03.692935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.487 [2024-11-06 13:50:03.693766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 [2024-11-06 13:50:04.376900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 Malloc0 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.426 13:50:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 [2024-11-06 13:50:04.422308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=674937 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 674937 /var/tmp/bdevperf.sock 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 674937 ']' 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:25.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:06:25.426 13:50:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 [2024-11-06 13:50:04.464649] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
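Stripped of the xtrace noise, the test body above amounts to five target-side RPCs plus a bdevperf session pinned at queue depth 1024 (a sketch; rpc.py and bdevperf paths are relative to the SPDK tree, the transport options are copied verbatim from the trace, and all names and addresses are the ones used in this run):

# Target side: provision a RAM-backed namespace behind NVMe/TCP.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: -z makes bdevperf wait for RPCs; -q 1024 is the queue depth
# under test, -o 4096 the I/O size, -w verify the workload, -t 10 the runtime.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
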
00:06:25.426 [2024-11-06 13:50:04.464714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674937 ] 00:06:25.426 [2024-11-06 13:50:04.549142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.426 [2024-11-06 13:50:04.602290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.994 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.994 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:06:25.994 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:06:25.994 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.994 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:26.253 NVMe0n1 00:06:26.253 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.253 13:50:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:26.253 Running I/O for 10 seconds... 00:06:28.570 11264.00 IOPS, 44.00 MiB/s [2024-11-06T12:50:08.793Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-06T12:50:09.766Z] 12288.00 IOPS, 48.00 MiB/s [2024-11-06T12:50:10.702Z] 12544.00 IOPS, 49.00 MiB/s [2024-11-06T12:50:11.639Z] 12694.60 IOPS, 49.59 MiB/s [2024-11-06T12:50:12.577Z] 12806.33 IOPS, 50.02 MiB/s [2024-11-06T12:50:13.516Z] 12901.29 IOPS, 50.40 MiB/s [2024-11-06T12:50:14.976Z] 12976.88 IOPS, 50.69 MiB/s [2024-11-06T12:50:15.596Z] 13075.11 IOPS, 51.07 MiB/s [2024-11-06T12:50:15.596Z] 13103.20 IOPS, 51.18 MiB/s 00:06:36.312 Latency(us) 00:06:36.312 [2024-11-06T12:50:15.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:36.312 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:06:36.312 Verification LBA range: start 0x0 length 0x4000 00:06:36.312 NVMe0n1 : 10.05 13139.99 51.33 0.00 0.00 77675.63 18677.76 55050.24 00:06:36.312 [2024-11-06T12:50:15.596Z] =================================================================================================================== 00:06:36.312 [2024-11-06T12:50:15.596Z] Total : 13139.99 51.33 0.00 0.00 77675.63 18677.76 55050.24 00:06:36.312 { 00:06:36.312 "results": [ 00:06:36.312 { 00:06:36.312 "job": "NVMe0n1", 00:06:36.312 "core_mask": "0x1", 00:06:36.312 "workload": "verify", 00:06:36.312 "status": "finished", 00:06:36.312 "verify_range": { 00:06:36.312 "start": 0, 00:06:36.312 "length": 16384 00:06:36.312 }, 00:06:36.312 "queue_depth": 1024, 00:06:36.312 "io_size": 4096, 00:06:36.312 "runtime": 10.049395, 00:06:36.312 "iops": 13139.994994723564, 00:06:36.312 "mibps": 51.32810544813892, 00:06:36.312 "io_failed": 0, 00:06:36.312 "io_timeout": 0, 00:06:36.312 "avg_latency_us": 77675.63465319692, 00:06:36.312 "min_latency_us": 18677.76, 00:06:36.312 "max_latency_us": 55050.24 00:06:36.312 } 00:06:36.312 ], 00:06:36.312 "core_count": 1 00:06:36.312 } 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 674937 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 674937 ']' 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 674937 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 674937 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 674937' 00:06:36.312 killing process with pid 674937 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 674937 00:06:36.312 Received shutdown signal, test time was about 10.000000 seconds 00:06:36.312 00:06:36.312 Latency(us) 00:06:36.312 [2024-11-06T12:50:15.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:36.312 [2024-11-06T12:50:15.596Z] =================================================================================================================== 00:06:36.312 [2024-11-06T12:50:15.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:36.312 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 674937 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:36.572 rmmod nvme_tcp 00:06:36.572 rmmod nvme_fabrics 00:06:36.572 rmmod nvme_keyring 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 674600 ']' 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 674600 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 674600 ']' 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 674600 00:06:36.572 
13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 674600 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 674600' 00:06:36.572 killing process with pid 674600 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 674600 00:06:36.572 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 674600 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.832 13:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.736 00:06:38.736 real 0m20.033s 00:06:38.736 user 0m24.613s 00:06:38.736 sys 0m5.345s 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:38.736 ************************************ 00:06:38.736 END TEST nvmf_queue_depth 00:06:38.736 ************************************ 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.736 13:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.995 
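As a sanity check on the result above, 13139.99 IOPS at 4096-byte I/Os is 13139.99 * 4096 / 1048576 ≈ 51.3 MiB/s, which matches the reported MiB/s column. The cleanup traced here mirrors the setup in reverse (a sketch; the pids are the ones from this run, and _remove_spdk_ns is shown as a plain namespace delete, which is an assumption about its effect):

# Stop the initiator, then the target, then undo kernel and network state.
kill 674937; wait 674937                               # bdevperf
kill 674600; wait 674600                               # nvmf_tgt
modprobe -r nvme-tcp nvme-fabrics nvme-keyring         # unload what nvmftestinit loaded
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1
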
************************************ 00:06:38.995 START TEST nvmf_target_multipath 00:06:38.995 ************************************ 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:06:38.995 * Looking for test storage... 00:06:38.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.995 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.996 --rc genhtml_branch_coverage=1 00:06:38.996 --rc genhtml_function_coverage=1 00:06:38.996 --rc genhtml_legend=1 00:06:38.996 --rc geninfo_all_blocks=1 00:06:38.996 --rc geninfo_unexecuted_blocks=1 00:06:38.996 00:06:38.996 ' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.996 --rc genhtml_branch_coverage=1 00:06:38.996 --rc genhtml_function_coverage=1 00:06:38.996 --rc genhtml_legend=1 00:06:38.996 --rc geninfo_all_blocks=1 00:06:38.996 --rc geninfo_unexecuted_blocks=1 00:06:38.996 00:06:38.996 ' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.996 --rc genhtml_branch_coverage=1 00:06:38.996 --rc genhtml_function_coverage=1 00:06:38.996 --rc genhtml_legend=1 00:06:38.996 --rc geninfo_all_blocks=1 00:06:38.996 --rc geninfo_unexecuted_blocks=1 00:06:38.996 00:06:38.996 ' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.996 --rc genhtml_branch_coverage=1 00:06:38.996 --rc genhtml_function_coverage=1 00:06:38.996 --rc genhtml_legend=1 00:06:38.996 --rc geninfo_all_blocks=1 00:06:38.996 --rc geninfo_unexecuted_blocks=1 00:06:38.996 00:06:38.996 ' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.996 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.997 13:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:44.327 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:44.327 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:44.327 Found net devices under 0000:31:00.0: cvl_0_0 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.327 13:50:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:44.327 Found net devices under 0000:31:00.1: cvl_0_1 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:06:44.327 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:44.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:06:44.588 00:06:44.588 --- 10.0.0.2 ping statistics --- 00:06:44.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.588 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:06:44.588 00:06:44.588 --- 10.0.0.1 ping statistics --- 00:06:44.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.588 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:06:44.588 only one NIC for nvmf test 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
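The nvmf_tcp_init sequence traced above builds a loopback NVMe/TCP topology out of the two E810 ports: the target-side port is moved into its own network namespace so initiator and target can talk over real interfaces on a single host. A minimal sketch of the same steps, with every command, interface name, and address taken from the trace itself:

    ip netns add cvl_0_0_ns_spdk                       # target gets a private namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check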
00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:44.588 rmmod nvme_tcp 00:06:44.588 rmmod nvme_fabrics 00:06:44.588 rmmod nvme_keyring 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.588 13:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:06:46.501 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:46.761 00:06:46.761 real 0m7.777s 00:06:46.761 user 0m1.516s 00:06:46.761 sys 0m4.139s 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:06:46.761 ************************************ 00:06:46.761 END TEST nvmf_target_multipath 00:06:46.761 ************************************ 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.761 ************************************ 00:06:46.761 START TEST nvmf_zcopy 00:06:46.761 ************************************ 00:06:46.761 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:06:46.762 * Looking for test storage... 
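The nvmftestfini teardown that closes the multipath test above retries unloading the kernel NVMe modules, strips only the SPDK-tagged firewall rules, and removes the namespace state. A condensed sketch of that sequence; the retry loop's exit condition and the netns deletion line are assumptions, since _remove_spdk_ns runs with its xtrace redirected away in the log:

    sync
    set +e
    for i in {1..20}; do                       # retry: module unload can race with device teardown
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's tagged ACCEPT rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns, not shown in trace
    ip -4 addr flush cvl_0_1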
00:06:46.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:46.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.762 --rc genhtml_branch_coverage=1 00:06:46.762 --rc genhtml_function_coverage=1 00:06:46.762 --rc genhtml_legend=1 00:06:46.762 --rc geninfo_all_blocks=1 00:06:46.762 --rc geninfo_unexecuted_blocks=1 00:06:46.762 00:06:46.762 ' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:46.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.762 --rc genhtml_branch_coverage=1 00:06:46.762 --rc genhtml_function_coverage=1 00:06:46.762 --rc genhtml_legend=1 00:06:46.762 --rc geninfo_all_blocks=1 00:06:46.762 --rc geninfo_unexecuted_blocks=1 00:06:46.762 00:06:46.762 ' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:46.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.762 --rc genhtml_branch_coverage=1 00:06:46.762 --rc genhtml_function_coverage=1 00:06:46.762 --rc genhtml_legend=1 00:06:46.762 --rc geninfo_all_blocks=1 00:06:46.762 --rc geninfo_unexecuted_blocks=1 00:06:46.762 00:06:46.762 ' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:46.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.762 --rc genhtml_branch_coverage=1 00:06:46.762 --rc genhtml_function_coverage=1 00:06:46.762 --rc genhtml_legend=1 00:06:46.762 --rc geninfo_all_blocks=1 00:06:46.762 --rc geninfo_unexecuted_blocks=1 00:06:46.762 00:06:46.762 ' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:46.762 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:06:46.763 13:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.340 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:53.341 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:53.341 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:53.341 Found net devices under 0000:31:00.0: cvl_0_0 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:53.341 Found net devices under 0000:31:00.1: cvl_0_1 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:06:53.341 00:06:53.341 --- 10.0.0.2 ping statistics --- 00:06:53.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.341 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:53.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:06:53.341 00:06:53.341 --- 10.0.0.1 ping statistics --- 00:06:53.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.341 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=686297 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 686297 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 686297 ']' 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.341 13:50:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:53.341 [2024-11-06 13:50:31.663117] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:06:53.341 [2024-11-06 13:50:31.663167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:53.341 [2024-11-06 13:50:31.747980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.341 [2024-11-06 13:50:31.792469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:53.342 [2024-11-06 13:50:31.792513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:53.342 [2024-11-06 13:50:31.792521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:53.342 [2024-11-06 13:50:31.792528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:53.342 [2024-11-06 13:50:31.792535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:53.342 [2024-11-06 13:50:31.793235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 [2024-11-06 13:50:32.497093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 [2024-11-06 13:50:32.513386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 malloc0
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:53.342 {
00:06:53.342 "params": {
00:06:53.342 "name": "Nvme$subsystem",
00:06:53.342 "trtype": "$TEST_TRANSPORT",
00:06:53.342 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:53.342 "adrfam": "ipv4",
00:06:53.342 "trsvcid": "$NVMF_PORT",
00:06:53.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:53.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:53.342 "hdgst": ${hdgst:-false},
00:06:53.342 "ddgst": ${ddgst:-false}
00:06:53.342 },
00:06:53.342 "method": "bdev_nvme_attach_controller"
00:06:53.342 }
00:06:53.342 EOF
00:06:53.342 )")
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
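The rpc_cmd calls xtraced above are thin wrappers around SPDK's scripts/rpc.py client talking to the /var/tmp/spdk.sock socket named by waitforlisten earlier. A hedged recap of the target bring-up as direct rpc.py invocations, with every argument copied from this run (the wrapper plumbing in common/autotest_common.sh is elided):

  # Sketch only: same bring-up as the rpc_cmd calls above, issued directly.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                                       # NVMF_TRANSPORT_OPTS plus zero-copy
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # allow-any-host, 10 namespaces max
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                                              # 32 MB ramdisk, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                      # attach malloc0 as NSID 1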
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:06:53.342 13:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:53.342 "params": {
00:06:53.342 "name": "Nvme1",
00:06:53.342 "trtype": "tcp",
00:06:53.342 "traddr": "10.0.0.2",
00:06:53.342 "adrfam": "ipv4",
00:06:53.342 "trsvcid": "4420",
00:06:53.342 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:06:53.342 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:06:53.342 "hdgst": false,
00:06:53.342 "ddgst": false
00:06:53.342 },
00:06:53.342 "method": "bdev_nvme_attach_controller"
00:06:53.342 }'
00:06:53.342 [2024-11-06 13:50:32.586758] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:06:53.342 [2024-11-06 13:50:32.586824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686353 ]
00:06:53.602 [2024-11-06 13:50:32.671108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.602 [2024-11-06 13:50:32.724558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.862 Running I/O for 10 seconds...
00:06:56.190 8972.00 IOPS, 70.09 MiB/s [2024-11-06T12:50:36.418Z]
9444.50 IOPS, 73.79 MiB/s [2024-11-06T12:50:37.358Z]
9593.00 IOPS, 74.95 MiB/s [2024-11-06T12:50:38.298Z]
9672.25 IOPS, 75.56 MiB/s [2024-11-06T12:50:39.238Z]
9717.40 IOPS, 75.92 MiB/s [2024-11-06T12:50:40.178Z]
9747.33 IOPS, 76.15 MiB/s [2024-11-06T12:50:41.119Z]
9769.57 IOPS, 76.32 MiB/s [2024-11-06T12:50:42.501Z]
9791.62 IOPS, 76.50 MiB/s [2024-11-06T12:50:43.441Z]
9804.11 IOPS, 76.59 MiB/s [2024-11-06T12:50:43.441Z]
9814.00 IOPS, 76.67 MiB/s
00:07:04.157 Latency(us)
00:07:04.157 [2024-11-06T12:50:43.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:04.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:07:04.157 Verification LBA range: start 0x0 length 0x1000
00:07:04.157 Nvme1n1 : 10.01 9817.20 76.70 0.00 0.00 12996.93 2525.87 28180.48
00:07:04.157 [2024-11-06T12:50:43.441Z] ===================================================================================================================
00:07:04.157 [2024-11-06T12:50:43.442Z] Total : 9817.20 76.70 0.00 0.00 12996.93 2525.87 28180.48
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=688673
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:04.158 {
00:07:04.158 "params": {
00:07:04.158 "name": "Nvme$subsystem",
00:07:04.158 "trtype": "$TEST_TRANSPORT",
00:07:04.158 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:04.158 "adrfam": "ipv4",
00:07:04.158 "trsvcid": "$NVMF_PORT",
00:07:04.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:04.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:04.158 "hdgst": ${hdgst:-false},
00:07:04.158 "ddgst": ${ddgst:-false}
00:07:04.158 },
00:07:04.158 "method": "bdev_nvme_attach_controller"
00:07:04.158 }
00:07:04.158 EOF
00:07:04.158 )")
00:07:04.158 [2024-11-06 13:50:43.193544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 13:50:43.193575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:07:04.158 [2024-11-06 13:50:43.201531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 13:50:43.201540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:07:04.158 13:50:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:04.158 "params": {
00:07:04.158 "name": "Nvme1",
00:07:04.158 "trtype": "tcp",
00:07:04.158 "traddr": "10.0.0.2",
00:07:04.158 "adrfam": "ipv4",
00:07:04.158 "trsvcid": "4420",
00:07:04.158 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:04.158 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:04.158 "hdgst": false,
00:07:04.158 "ddgst": false
00:07:04.158 },
00:07:04.158 "method": "bdev_nvme_attach_controller"
00:07:04.158 }'
00:07:04.158 [2024-11-06 13:50:43.209550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 13:50:43.209558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:04.158 [2024-11-06 13:50:43.217570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 13:50:43.217579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:04.158 [2024-11-06 13:50:43.222750] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:07:04.158 [2024-11-06 13:50:43.222796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688673 ] 00:07:04.158 [2024-11-06 13:50:43.225590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.225602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.237622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.237629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.245643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.245650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.253663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.253671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.261682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.261689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.269703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.269710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.277723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.277730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.285743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.285750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.287287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.158 [2024-11-06 13:50:43.293764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.293773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.301785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.301793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.309805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.309813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.316704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.158 [2024-11-06 13:50:43.317825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.317833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.325846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:07:04.158 [2024-11-06 13:50:43.325854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.333873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.333883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.341891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.341900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.349910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.349920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.357928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.357937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.365949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.365957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.373969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.373981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.381989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.381997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.390025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.390042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.398036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.398045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.406056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.406066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.414075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.414085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.422095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.422103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.430117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.430125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.158 [2024-11-06 13:50:43.438138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.158 [2024-11-06 13:50:43.438145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 
13:50:43.446159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.446166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.454182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.454191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.462204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.462213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.470228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.470239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.478251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.478259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.486277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.486291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.494291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.494299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 Running I/O for 5 seconds... 00:07:04.418 [2024-11-06 13:50:43.502306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.502313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.513229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.513251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.522348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.522363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.531173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.531192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.540584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.540599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.549136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.549151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.558178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.558193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.566765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:07:04.418 [2024-11-06 13:50:43.566780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.575253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.575269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.584042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.584057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.592598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.592613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.601921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.601936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.610400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.610415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.618971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.618986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.626906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.626921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.635874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.635888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.644260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.644275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.652712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.652726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.661577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.661592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.670528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.670543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.679189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.679204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.687862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.687877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.418 [2024-11-06 13:50:43.696640] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.418 [2024-11-06 13:50:43.696658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.705412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.705427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.714106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.714121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.722978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.722993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.731761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.731775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.740156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.740171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.748856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.748871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.757674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.757689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.766437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.766451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.774833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.774848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.783695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.783710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.792937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.792951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.801728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.801742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.677 [2024-11-06 13:50:43.810655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.677 [2024-11-06 13:50:43.810669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.819622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.819636] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.828427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.828441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.837308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.837323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.846113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.846127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.854822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.854837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.864080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.864094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.872474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.872489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.881570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.881585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.890215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.890230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.898500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.898514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.907250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.907264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.915854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.915869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.924917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.924932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.934358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.934373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.942797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.942811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.951980] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.951994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.678 [2024-11-06 13:50:43.960879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.678 [2024-11-06 13:50:43.960893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:43.969785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:43.969800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:43.978582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:43.978596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:43.987545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:43.987559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:43.996656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:43.996670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.005741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.005758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.014557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.014571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.023261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.023275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.031943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.031957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.040874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.040889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.049559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.049573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.058655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.058669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.067447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.067461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.076757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.076771] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.085503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.085517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.094575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.094590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.103286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.937 [2024-11-06 13:50:44.103300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.937 [2024-11-06 13:50:44.112031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.112045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.120360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.120374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.129285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.129299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.137674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.137688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.147159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.147173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.155022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.155036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.163909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.163923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.173207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.173222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.181455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.181469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.190166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.190180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.198483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.198497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.207887] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.207901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:04.938 [2024-11-06 13:50:44.216669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:04.938 [2024-11-06 13:50:44.216682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.225266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.225280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.234629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.234643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.243447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.243462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.252250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.252264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.260997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.261011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.270188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.270203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.279182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.279197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.288535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.288550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.297652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.297666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.306949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.306963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.315802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.315817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.324863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.324878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.333579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.333593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.342928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.342943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.351644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.351658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.360721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.360735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.368959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.368973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.377657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.377672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.386719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.386734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.395583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.395597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.403997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.404012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.412922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.412936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.422155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.422170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.430893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.430907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.439893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.439907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.448575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.448589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.458131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.458145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.466560] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.466574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.198 [2024-11-06 13:50:44.474834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.198 [2024-11-06 13:50:44.474848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.483584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.483598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.491955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.491969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.500477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.500492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 19332.00 IOPS, 151.03 MiB/s [2024-11-06T12:50:44.742Z] [2024-11-06 13:50:44.508458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.508473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.517554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.517568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.526555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.526579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.535209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.535223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.544407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.544421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.553323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.553337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.562031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.562045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.571195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.571209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.580112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.580126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.588667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
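The subsystem.c/nvmf_rpc.c pairs that fill the rest of this section, interleaved with the randrw job's progress tick (19332.00 IOPS above), are repeated nvmf_subsystem_add_ns attempts issued while bdevperf (perfpid 688673) is in flight; each fails because NSID 1 is still attached. A rough, hypothetical reconstruction of the loop shape (the exact control flow lives in target/zcopy.sh and is not fully visible in this xtrace; rpc.py assumed on PATH):

  # Hypothetical sketch: hammer the namespace-add path while I/O runs;
  # '|| true' tolerates the expected "Requested NSID 1 already in use".
  while kill -0 "$perfpid" 2>/dev/null; do
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done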
00:07:05.458 [2024-11-06 13:50:44.588681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.597716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.597730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.606865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.606879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.615603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.615618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.625017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.625032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.634397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.634411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.643541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.643555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.652265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.652279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.661034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.661049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.669950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.669964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.678611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.678626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.687383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.687397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.696337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.696355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.705387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.705401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.714130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.714144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.722869] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.722884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.731301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.731317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.458 [2024-11-06 13:50:44.739717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.458 [2024-11-06 13:50:44.739731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.748824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.748839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.757700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.757715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.766447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.766462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.775410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.775424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.784749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.784764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.793432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.793446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.802524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.802538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.810972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.810986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.820285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.820300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.717 [2024-11-06 13:50:44.828984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.717 [2024-11-06 13:50:44.828998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.838005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.838019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.846790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.846804] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.856315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.856329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.864211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.864228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.873515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.873529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.881872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.881886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.891123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.891137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.899967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.899981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.908367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.908381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.917776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.917791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.926034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.926049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.935073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.935088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.943886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.943901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.952608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.952623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.961505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.961520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.970312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.970327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.979640] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.979655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.988680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.988695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.718 [2024-11-06 13:50:44.997056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.718 [2024-11-06 13:50:44.997071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.006476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.006491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.015307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.015322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.024251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.024265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.033647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.033662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.042372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.042387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.051526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.051541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.060260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.060275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.069038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.069054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.078087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.078102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.087303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.087317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.095941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.095955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.105057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.105072] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.114093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.114107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.122657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.122672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.131358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.131372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.140644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.140659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.149596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.149611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.158821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.158835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.167174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.167188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.175892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.175906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.184739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.184754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.193416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.193430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.202654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.202669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.211529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.211544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.220743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.220758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.229666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.229680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.238902] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.238917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.247801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.247815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:05.978 [2024-11-06 13:50:45.256948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:05.978 [2024-11-06 13:50:45.256962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.265956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.265970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.275284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.275299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.283994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.284008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.293038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.293052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.301864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.301878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.310327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.310342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.318633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.318647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.327359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.327373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.335757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.335773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.344143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.344157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.353111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.353125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.362422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.362436] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.371560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.371575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.379966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.379981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.389193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.389207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.398027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.398041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.407020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.407035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.415882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.415896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.424272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.424286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.434049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.434063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.442524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.442539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.451249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.451263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.460638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.237 [2024-11-06 13:50:45.460652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.237 [2024-11-06 13:50:45.469374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.238 [2024-11-06 13:50:45.469388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.238 [2024-11-06 13:50:45.478169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.238 [2024-11-06 13:50:45.478183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.238 [2024-11-06 13:50:45.487118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.238 [2024-11-06 13:50:45.487132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.238 [2024-11-06 13:50:45.495784] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.238 [2024-11-06 13:50:45.495798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.238 19465.50 IOPS, 152.07 MiB/s [2024-11-06T12:50:45.522Z] [2024-11-06 13:50:45.504437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.238 [2024-11-06 13:50:45.504452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.238 [2024-11-06 13:50:45.513150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.238 [2024-11-06 13:50:45.513164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.521816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.521830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.530145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.530163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.539022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.539037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.548318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.548332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.557069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.557084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.565892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.565906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.574930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.574944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.583779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.583793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.593045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.593060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.601987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.602001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.610375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.610391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.619552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:07:06.497 [2024-11-06 13:50:45.619566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.628172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.628187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.636977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.636991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.646010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.646025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.654725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.654740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.663421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.663435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.672559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.672573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.681318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.681333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.690614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.690629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.698945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.698963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.707860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.707875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.716628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.497 [2024-11-06 13:50:45.716643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.497 [2024-11-06 13:50:45.725867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.725882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.498 [2024-11-06 13:50:45.735055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.735070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.498 [2024-11-06 13:50:45.743649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.743663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.498 [2024-11-06 13:50:45.752704] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.752718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.498 [2024-11-06 13:50:45.761675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.761690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.498 [2024-11-06 13:50:45.770947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.770961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.498 [2024-11-06 13:50:45.779940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.498 [2024-11-06 13:50:45.779954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.788370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.788384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.797366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.797380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.806353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.806367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.815442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.815456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.823899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.823913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.832371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.832386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.840856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.840870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.850040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.850055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.858879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.858893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.867128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.867146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.876010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.876025] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.884833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.884848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.893530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.893544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.902084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.902098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.911122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.911137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.919925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.919939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.929191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.929205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.937822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.937836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.946822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.946837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.956304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.956319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.965040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.965055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.973711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.973726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.982508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.982522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:45.991640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:45.991654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:46.000644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:46.000658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:46.009063] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:46.009078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:46.018266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:46.018281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:46.027113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:46.027128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:06.756 [2024-11-06 13:50:46.035930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:06.756 [2024-11-06 13:50:46.035948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.044647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.044663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.053321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.053335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.062437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.062452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.071280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.071295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.080660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.080674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.089333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.089347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.097985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.098000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.106613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.106628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.115361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.115375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.123814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.123829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.132522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.132536] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.141017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.141031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.149814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.149829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.158496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.158510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.166893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.166907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.176090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.176105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.184937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.184951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.194076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.194091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.202425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.202439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.211284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.211299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.220810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.220825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.229320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.229334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.237860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.237875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.247058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.247073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.255747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.255762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.264568] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.264583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.273117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.273132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.282132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.282147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.016 [2024-11-06 13:50:46.290916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.016 [2024-11-06 13:50:46.290930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.300155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.300169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.309096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.309111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.318459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.318474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.327037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.327051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.335891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.335907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.344482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.344496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.352955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.352970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.361879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.361893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.370150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.370165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.378499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.378513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.387454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.387468] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.396131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.396145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.404976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.404991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.413901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.413916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.422788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.422803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.431173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.431188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.439860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.439874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.449161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.449176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.458185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.458200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.466904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.466919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.475804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.475819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.484521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.484537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.493286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.493301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.502204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.502219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 19464.00 IOPS, 152.06 MiB/s [2024-11-06T12:50:46.560Z] [2024-11-06 13:50:46.511429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.511444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 
13:50:46.520504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.520519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.529393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.529417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.538346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.538360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.546849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.546866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.276 [2024-11-06 13:50:46.555581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.276 [2024-11-06 13:50:46.555596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.535 [2024-11-06 13:50:46.564355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.535 [2024-11-06 13:50:46.564369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.572982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.572997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.581170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.581185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.589915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.589929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.598545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.598559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.607845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.607859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.617204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.617219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.625833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.625847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.635139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.635154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.643645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.643673] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.652315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.652329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.660835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.660849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.669586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.669600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.678555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.678571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.687278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.687293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.696098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.696115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.704551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.704566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.713921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.713935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.723012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.723027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.731710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.731725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.740532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.740547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.749421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.749435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.758338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.758352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.766600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:07.536 [2024-11-06 13:50:46.766614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:07.536 [2024-11-06 13:50:46.775642] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:07.536 [2024-11-06 13:50:46.775656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[this subsystem.c:2123 / nvmf_rpc.c:1517 rejection pair repeats about 190 more times at roughly 9 ms intervals, from 13:50:46.784715 through 13:50:48.502692, while the script keeps retrying nvmf_subsystem_add_ns for the in-use NSID 1; the only other output in that window is the periodic bdevperf readout]
00:07:08.318 19486.75 IOPS, 152.24 MiB/s [2024-11-06T12:50:47.602Z]
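This flood is expected behavior, not a failure of the run: while the I/O job owns NSID 1, the script keeps re-issuing the same RPC and relies on it being rejected. The exact loop body is not shown in this log; the following is a hedged reconstruction (rpc_cmd, the NQN, and malloc0 are taken from the surrounding trace, and $perf_pid is illustrative):

    # Hedged sketch of the expected-failure loop behind the error spam.
    # Each attempt to reuse NSID 1 must fail, emitting one
    # subsystem.c:2123 / nvmf_rpc.c:1517 pair per iteration.
    while kill -0 "$perf_pid" 2>/dev/null; do
        if rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
            echo "BUG: duplicate NSID 1 was accepted" >&2
            exit 1
        fi
    done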
00:07:09.362 19496.80 IOPS, 152.32 MiB/s [2024-11-06T12:50:48.646Z]
[one last rejection pair lands at 13:50:48.511663/48.511677 before the job summary]
00:07:09.362 Latency(us)
00:07:09.362 [2024-11-06T12:50:48.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:09.362 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:07:09.362 Nvme1n1 : 5.01 19499.60 152.34 0.00 0.00 6558.50 2798.93 14199.47
00:07:09.362 [2024-11-06T12:50:48.646Z] ===================================================================================================================
00:07:09.362 [2024-11-06T12:50:48.646Z] Total : 19499.60 152.34 0.00 0.00 6558.50 2798.93 14199.47
[after the summary the rejection pair fires a dozen more times at 8 ms intervals, 13:50:48.517494 through 13:50:48.613739, while the retry loop drains]
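A quick sanity check on the summary, independent of the test itself: the MiB/s column is just IOPS multiplied by the 8192-byte I/O size from the job line.

    # Verify MiB/s = IOPS * IO size, values copied from the table above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 19499.60 * 8192 / (1024 * 1024) }'
    # prints 152.34 MiB/s, matching the Nvme1n1 and Total rows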
00:07:09.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (688673) - No such process
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 688673
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:07:09.362 delay0
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
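The bdev_delay_create call above wraps the existing malloc0 bdev in a delay bdev named delay0; its only purpose here is to slow I/O down so the abort run that follows has in-flight commands to cancel. Issued outside the harness, the same RPC would look like the sketch below (hedged: the scripts/rpc.py path is an assumption, and the flag readings follow the usual SPDK convention of -r/-w average and -t/-n p99 latency in microseconds):

    # Hedged sketch, not copied from the logged run: add ~1 s of average
    # and p99 latency to every read and write against malloc0.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000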
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:50:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-06 13:50:48.732310] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:07:16.195 Initializing NVMe Controllers
00:07:16.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:16.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:16.195 Initialization complete. Launching workers.
00:07:16.195 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 82
00:07:16.195 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 369, failed to submit 33
00:07:16.195 success 173, unsuccessful 196, failed 0
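The abort example is the point of the whole exercise: with delay0 holding every command for about a second, a queue depth of 64 guarantees there is always in-flight I/O for the aborts to land on. The counters are self-consistent: success 173 plus unsuccessful 196 equals the 369 aborts submitted. A hedged reconstruction of the invocation, with flag readings that follow common SPDK example conventions rather than anything stated in this log:

    # Hedged sketch: rerun the abort example against the same target.
    # -c 0x1 core mask, -t 5 seconds, -q 64 queue depth, -w randrw mixed
    # workload, -M 50 percent reads, -l warning log level, -r transport ID.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'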
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 686297 ']'
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 686297
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 686297 ']'
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 686297
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 686297
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 686297'
killing process with pid 686297
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 686297
13:50:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 686297
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
13:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:18.104
00:07:18.104 real 0m31.248s
00:07:18.104 user 0m44.257s
00:07:18.104 sys 0m7.981s
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:07:18.104 ************************************
00:07:18.104 END TEST nvmf_zcopy
00:07:18.104 ************************************
13:50:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
13:50:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
13:50:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
13:50:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:18.104 ************************************
00:07:18.104 START TEST nvmf_nmic
00:07:18.104 ************************************
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:07:18.104 * Looking for test storage...
00:07:18.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:18.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.104 --rc genhtml_branch_coverage=1
00:07:18.104 --rc genhtml_function_coverage=1
00:07:18.104 --rc genhtml_legend=1
00:07:18.104 --rc geninfo_all_blocks=1
00:07:18.104 --rc geninfo_unexecuted_blocks=1
00:07:18.104
00:07:18.104 '
[the same flag block is echoed three more times as LCOV_OPTS and LCOV='lcov ...' are assigned and exported at common/autotest_common.sh@1704-1705]
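The xtrace above is scripts/common.sh deciding `lt 1.15 2`: cmp_versions splits both strings on dots, dashes, and colons, pads the shorter one, and compares field by field, so 1 < 2 settles the answer at the first component. A condensed standalone sketch of the same idea (hedged; the real cmp_versions supports more operators and edge cases):

    # Hedged sketch of a field-wise version compare, as exercised above.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            if (( a < b )); then return 0; fi   # first differing field decides
            if (( a > b )); then return 1; fi
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2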
00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.105 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:07:18.105 
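
Worth noting in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq refuses a non-integer operand, hence the logged '[: : integer expression expected'. The run survives because the failed test simply counts as false, and the same message reappears every time common.sh is sourced (see the nvmf_fio_target preamble further down). A small illustration of the failure mode plus a defensive spelling; SOME_FLAG is a hypothetical stand-in, not the variable common.sh actually tests:

  SOME_FLAG=''                                   # hypothetical; mirrors the empty operand at line 33
  [ "$SOME_FLAG" -eq 1 ] && echo 'flag is 1'     # -> [: : integer expression expected, test is false
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo 'flag is 1'  # defaulted to 0: quiet, still false
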
13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.106 13:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:24.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:24.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.684 13:51:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.684 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:24.685 Found net devices under 0000:31:00.0: cvl_0_0 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:24.685 Found net devices under 0000:31:00.1: cvl_0_1 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:24.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:07:24.685 00:07:24.685 --- 10.0.0.2 ping statistics --- 00:07:24.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.685 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:24.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:07:24.685 00:07:24.685 --- 10.0.0.1 ping statistics --- 00:07:24.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.685 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:24.685 13:51:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=695789 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 695789 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 695789 ']' 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.685 [2024-11-06 13:51:03.071362] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
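
The sequence just traced is nvmf_tcp_init building a switchless point-to-point topology from the two E810 ports: cvl_0_0 (10.0.0.2) moves into a fresh network namespace to host the target, its peer cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, an iptables rule opens TCP/4420, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace via the 'ip netns exec cvl_0_0_ns_spdk' prefix seen above. Condensed from the trace, device names as logged:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # drop any stale addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and back
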
00:07:24.685 [2024-11-06 13:51:03.071423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.685 [2024-11-06 13:51:03.162770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.685 [2024-11-06 13:51:03.217892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.685 [2024-11-06 13:51:03.217944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.685 [2024-11-06 13:51:03.217953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.685 [2024-11-06 13:51:03.217961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.685 [2024-11-06 13:51:03.217967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.685 [2024-11-06 13:51:03.220074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.685 [2024-11-06 13:51:03.220236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.685 [2024-11-06 13:51:03.220372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.685 [2024-11-06 13:51:03.220373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.685 [2024-11-06 13:51:03.893702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.685 Malloc0 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.685 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.686 [2024-11-06 13:51:03.952907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:07:24.686 test case1: single bdev can't be used in multiple subsystems 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.686 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.946 [2024-11-06 13:51:03.976771] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:07:24.946 [2024-11-06 13:51:03.976789] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:07:24.946 [2024-11-06 13:51:03.976797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:24.946 request: 00:07:24.946 { 00:07:24.946 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:24.946 "namespace": { 00:07:24.946 "bdev_name": "Malloc0", 00:07:24.946 "no_auto_visible": false 
00:07:24.946 }, 00:07:24.946 "method": "nvmf_subsystem_add_ns", 00:07:24.946 "req_id": 1 00:07:24.946 } 00:07:24.946 Got JSON-RPC error response 00:07:24.946 response: 00:07:24.946 { 00:07:24.946 "code": -32602, 00:07:24.946 "message": "Invalid parameters" 00:07:24.946 } 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:07:24.946 Adding namespace failed - expected result. 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:07:24.946 test case2: host connect to nvmf target in multiple paths 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:24.946 [2024-11-06 13:51:03.984898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.946 13:51:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.324 13:51:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:07:27.703 13:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.703 13:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:07:27.703 13:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.703 13:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:07:27.703 13:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:07:30.241 13:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:07:30.241 [global] 00:07:30.241 thread=1 00:07:30.241 invalidate=1 00:07:30.241 rw=write 00:07:30.241 time_based=1 00:07:30.241 runtime=1 00:07:30.241 ioengine=libaio 00:07:30.241 direct=1 00:07:30.241 bs=4096 00:07:30.241 iodepth=1 00:07:30.241 norandommap=0 00:07:30.241 numjobs=1 00:07:30.241 00:07:30.241 verify_dump=1 00:07:30.241 verify_backlog=512 00:07:30.241 verify_state_save=0 00:07:30.241 do_verify=1 00:07:30.241 verify=crc32c-intel 00:07:30.241 [job0] 00:07:30.241 filename=/dev/nvme0n1 00:07:30.241 Could not set queue depth (nvme0n1) 00:07:30.241 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:30.241 fio-3.35 00:07:30.241 Starting 1 thread 00:07:31.181 00:07:31.181 job0: (groupid=0, jobs=1): err= 0: pid=697654: Wed Nov 6 13:51:10 2024 00:07:31.181 read: IOPS=19, BW=77.9KiB/s (79.8kB/s)(80.0KiB/1027msec) 00:07:31.181 slat (nsec): min=4075, max=27785, avg=23997.05, stdev=5799.66 00:07:31.182 clat (usec): min=798, max=42982, avg=39996.37, stdev=9240.83 00:07:31.182 lat (usec): min=802, max=43008, avg=40020.37, stdev=9245.47 00:07:31.182 clat percentiles (usec): 00:07:31.182 | 1.00th=[ 799], 5.00th=[ 799], 10.00th=[41157], 20.00th=[41681], 00:07:31.182 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:07:31.182 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:07:31.182 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:07:31.182 | 99.99th=[42730] 00:07:31.182 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:07:31.182 slat (nsec): min=3266, max=52541, avg=11820.56, stdev=4593.76 00:07:31.182 clat (usec): min=128, max=722, avg=426.31, stdev=105.80 00:07:31.182 lat (usec): min=133, max=735, avg=438.13, stdev=107.69 00:07:31.182 clat percentiles (usec): 00:07:31.182 | 1.00th=[ 186], 5.00th=[ 235], 10.00th=[ 281], 20.00th=[ 330], 00:07:31.182 | 30.00th=[ 371], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 453], 00:07:31.182 | 70.00th=[ 478], 80.00th=[ 519], 90.00th=[ 562], 95.00th=[ 594], 00:07:31.182 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 725], 99.95th=[ 725], 00:07:31.182 | 99.99th=[ 725] 00:07:31.182 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:07:31.182 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:31.182 lat (usec) : 250=6.58%, 500=66.92%, 750=22.74%, 1000=0.19% 00:07:31.182 lat (msec) : 50=3.57% 00:07:31.182 cpu : usr=0.68%, sys=0.68%, ctx=532, majf=0, minf=1 00:07:31.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:31.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:31.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:31.182 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:31.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:31.182 00:07:31.182 Run status group 0 (all jobs): 00:07:31.182 READ: bw=77.9KiB/s (79.8kB/s), 77.9KiB/s-77.9KiB/s (79.8kB/s-79.8kB/s), io=80.0KiB (81.9kB), run=1027-1027msec 00:07:31.182 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:07:31.182 00:07:31.182 Disk stats (read/write): 00:07:31.182 nvme0n1: ios=66/512, merge=0/0, ticks=1003/186, in_queue=1189, util=97.60% 00:07:31.442 13:51:10 
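
Stripped of xtrace, the nmic case that just completed reduces to a short rpc.py sequence plus the fio write pass above: expose one malloc bdev from cnode1, prove that a second subsystem cannot claim the same bdev (the -32602 JSON-RPC error logged earlier), then reach cnode1 over two listeners. A hedged recap; the rpc.py path is shortened and the --hostnqn/--hostid arguments the log passes to nvme connect are omitted:

  rpc=scripts/rpc.py   # shortened; the log runs it from the full spdk checkout path
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test case1: a second subsystem must be refused the same bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      || echo ' Adding namespace failed - expected result.'
  # test case2: one subsystem, two paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
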
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.442 rmmod nvme_tcp 00:07:31.442 rmmod nvme_fabrics 00:07:31.442 rmmod nvme_keyring 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 695789 ']' 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 695789 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 695789 ']' 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 695789 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 695789 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 695789' 00:07:31.442 killing process with pid 695789 00:07:31.442 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 695789 00:07:31.442 13:51:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 695789 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.701 13:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.718 00:07:33.718 real 0m15.741s 00:07:33.718 user 0m42.819s 00:07:33.718 sys 0m4.964s 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:33.718 ************************************ 00:07:33.718 END TEST nvmf_nmic 00:07:33.718 ************************************ 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.718 ************************************ 00:07:33.718 START TEST nvmf_fio_target 00:07:33.718 ************************************ 00:07:33.718 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:07:33.980 * Looking for test storage... 
00:07:33.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.980 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:33.980 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:07:33.981 13:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:33.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.981 --rc genhtml_branch_coverage=1 00:07:33.981 --rc genhtml_function_coverage=1 00:07:33.981 --rc genhtml_legend=1 00:07:33.981 --rc geninfo_all_blocks=1 00:07:33.981 --rc geninfo_unexecuted_blocks=1 00:07:33.981 00:07:33.981 ' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:33.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.981 --rc genhtml_branch_coverage=1 00:07:33.981 --rc genhtml_function_coverage=1 00:07:33.981 --rc genhtml_legend=1 00:07:33.981 --rc geninfo_all_blocks=1 00:07:33.981 --rc geninfo_unexecuted_blocks=1 00:07:33.981 00:07:33.981 ' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:33.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.981 --rc genhtml_branch_coverage=1 00:07:33.981 --rc genhtml_function_coverage=1 00:07:33.981 --rc genhtml_legend=1 00:07:33.981 --rc geninfo_all_blocks=1 00:07:33.981 --rc geninfo_unexecuted_blocks=1 00:07:33.981 00:07:33.981 ' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:33.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.981 --rc genhtml_branch_coverage=1 00:07:33.981 --rc genhtml_function_coverage=1 00:07:33.981 --rc genhtml_legend=1 00:07:33.981 --rc geninfo_all_blocks=1 00:07:33.981 --rc geninfo_unexecuted_blocks=1 00:07:33.981 00:07:33.981 ' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.981 13:51:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.981 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.982 13:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.257 13:51:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.257 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:39.258 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:39.258 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.258 13:51:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:39.258 Found net devices under 0000:31:00.0: cvl_0_0 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:39.258 Found net devices under 0000:31:00.1: cvl_0_1 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.258 13:51:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:07:39.258 00:07:39.258 --- 10.0.0.2 ping statistics --- 00:07:39.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.258 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:07:39.258 00:07:39.258 --- 10.0.0.1 ping statistics --- 00:07:39.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.258 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=702487 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 702487 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 702487 ']' 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.258 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.259 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.259 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.259 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:39.259 13:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.259 [2024-11-06 13:51:18.456082] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:07:39.259 [2024-11-06 13:51:18.456138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.518 [2024-11-06 13:51:18.544056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.518 [2024-11-06 13:51:18.592885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.518 [2024-11-06 13:51:18.592937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.518 [2024-11-06 13:51:18.592945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.518 [2024-11-06 13:51:18.592952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.518 [2024-11-06 13:51:18.592958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.518 [2024-11-06 13:51:18.595147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.518 [2024-11-06 13:51:18.595312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.518 [2024-11-06 13:51:18.595396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.518 [2024-11-06 13:51:18.595398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.087 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:40.346 [2024-11-06 13:51:19.409884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.346 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.346 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:07:40.346 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.605 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:07:40.605 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.864 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:07:40.864 13:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.864 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:07:40.864 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:07:41.123 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:41.383 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:07:41.383 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:41.383 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:07:41.383 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:41.642 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:07:41.642 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:07:41.901 13:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.901 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:07:41.901 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:42.160 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:07:42.160 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:42.160 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.419 [2024-11-06 13:51:21.553652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.419 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:07:42.679 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:07:42.679 13:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.585 13:51:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:07:44.585 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:07:44.585 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.585 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:07:44.585 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:07:44.585 13:51:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:07:46.490 13:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:07:46.490 [global] 00:07:46.490 thread=1 00:07:46.490 invalidate=1 00:07:46.490 rw=write 00:07:46.490 time_based=1 00:07:46.490 runtime=1 00:07:46.490 ioengine=libaio 00:07:46.491 direct=1 00:07:46.491 bs=4096 00:07:46.491 iodepth=1 00:07:46.491 norandommap=0 00:07:46.491 numjobs=1 00:07:46.491 00:07:46.491 verify_dump=1 00:07:46.491 verify_backlog=512 00:07:46.491 verify_state_save=0 00:07:46.491 do_verify=1 00:07:46.491 verify=crc32c-intel 00:07:46.491 [job0] 00:07:46.491 filename=/dev/nvme0n1 00:07:46.491 [job1] 00:07:46.491 filename=/dev/nvme0n2 00:07:46.491 [job2] 00:07:46.491 filename=/dev/nvme0n3 00:07:46.491 [job3] 00:07:46.491 filename=/dev/nvme0n4 00:07:46.491 Could not set queue depth (nvme0n1) 00:07:46.491 Could not set queue depth (nvme0n2) 00:07:46.491 Could not set queue depth (nvme0n3) 00:07:46.491 Could not set queue depth (nvme0n4) 00:07:46.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:46.491 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:46.491 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:46.491 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:46.491 fio-3.35 00:07:46.491 Starting 4 threads 00:07:47.875 00:07:47.875 job0: (groupid=0, jobs=1): err= 0: pid=704406: Wed Nov 6 13:51:26 2024 00:07:47.875 read: IOPS=18, BW=75.1KiB/s (76.9kB/s)(76.0KiB/1012msec) 00:07:47.875 slat (nsec): min=26275, max=27088, avg=26647.79, stdev=192.30 00:07:47.875 clat (usec): min=40914, max=42040, avg=41805.82, stdev=372.47 00:07:47.875 lat (usec): min=40940, max=42067, avg=41832.47, stdev=372.47 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41681], 00:07:47.875 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:07:47.875 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:07:47.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:07:47.875 | 99.99th=[42206] 00:07:47.875 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:07:47.875 slat (nsec): min=9698, max=65217, avg=26948.45, stdev=12354.07 00:07:47.875 clat (usec): min=212, max=622, avg=390.12, stdev=67.39 00:07:47.875 lat (usec): min=222, max=636, avg=417.07, stdev=72.59 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 243], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 322], 00:07:47.875 | 30.00th=[ 338], 40.00th=[ 375], 50.00th=[ 408], 60.00th=[ 420], 00:07:47.875 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 482], 00:07:47.875 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 627], 99.95th=[ 627], 00:07:47.875 | 99.99th=[ 627] 00:07:47.875 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:07:47.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:47.875 lat (usec) : 250=2.07%, 500=91.71%, 750=2.64% 00:07:47.875 lat (msec) : 50=3.58% 00:07:47.875 cpu : usr=0.69%, sys=1.29%, ctx=532, majf=0, minf=1 00:07:47.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.875 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.875 job1: (groupid=0, jobs=1): err= 0: pid=704407: Wed Nov 6 13:51:26 2024 00:07:47.875 read: IOPS=18, BW=74.5KiB/s (76.3kB/s)(76.0KiB/1020msec) 00:07:47.875 slat (nsec): min=16740, max=26387, avg=25007.89, stdev=2870.50 00:07:47.875 clat (usec): min=947, max=42479, avg=39780.80, stdev=9407.87 00:07:47.875 lat (usec): min=973, max=42505, avg=39805.81, stdev=9407.63 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[41157], 20.00th=[41681], 00:07:47.875 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:07:47.875 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:07:47.875 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:07:47.875 | 99.99th=[42730] 00:07:47.875 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:07:47.875 slat (nsec): min=4120, max=57620, avg=16422.63, stdev=8711.60 00:07:47.875 clat (usec): min=194, max=930, avg=493.34, stdev=128.29 00:07:47.875 lat (usec): min=206, max=945, avg=509.76, stdev=131.47 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 225], 5.00th=[ 293], 10.00th=[ 338], 20.00th=[ 383], 00:07:47.875 | 30.00th=[ 416], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 519], 00:07:47.875 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 668], 95.00th=[ 717], 00:07:47.875 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 930], 00:07:47.875 | 99.99th=[ 930] 00:07:47.875 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:07:47.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:47.875 lat (usec) : 250=2.82%, 500=47.83%, 750=42.75%, 1000=3.20% 00:07:47.875 lat (msec) : 50=3.39% 00:07:47.875 cpu : usr=0.49%, sys=0.59%, ctx=532, majf=0, minf=2 00:07:47.875 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.875 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.875 job2: (groupid=0, jobs=1): err= 0: pid=704408: Wed Nov 6 13:51:26 2024 00:07:47.875 read: IOPS=851, BW=3405KiB/s (3486kB/s)(3408KiB/1001msec) 00:07:47.875 slat (nsec): min=3074, max=44282, avg=14108.24, stdev=8174.97 00:07:47.875 clat (usec): min=105, max=911, avg=600.10, stdev=171.10 00:07:47.875 lat (usec): min=108, max=915, avg=614.21, stdev=173.47 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 115], 5.00th=[ 131], 10.00th=[ 247], 20.00th=[ 562], 00:07:47.875 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[ 676], 00:07:47.875 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 734], 95.00th=[ 758], 00:07:47.875 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 914], 99.95th=[ 914], 00:07:47.875 | 99.99th=[ 914] 00:07:47.875 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:07:47.875 slat (nsec): min=4130, max=64844, avg=13836.69, stdev=7611.18 00:07:47.875 clat (usec): min=82, max=705, avg=444.83, stdev=110.23 00:07:47.875 lat (usec): min=87, max=710, avg=458.66, stdev=111.83 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 88], 5.00th=[ 116], 10.00th=[ 334], 20.00th=[ 400], 00:07:47.875 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 469], 60.00th=[ 482], 00:07:47.875 | 70.00th=[ 502], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 578], 00:07:47.875 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 701], 99.95th=[ 709], 00:07:47.875 | 99.99th=[ 709] 00:07:47.875 bw ( KiB/s): min= 4096, max= 4096, per=34.00%, avg=4096.00, stdev= 0.00, samples=1 00:07:47.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:47.875 lat (usec) : 100=1.28%, 250=7.25%, 500=36.83%, 750=51.60%, 1000=3.04% 00:07:47.875 cpu : usr=1.80%, sys=2.00%, ctx=1877, majf=0, minf=2 00:07:47.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.875 issued rwts: total=852,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.875 job3: (groupid=0, jobs=1): err= 0: pid=704409: Wed Nov 6 13:51:26 2024 00:07:47.875 read: IOPS=1011, BW=4048KiB/s (4145kB/s)(4052KiB/1001msec) 00:07:47.875 slat (nsec): min=3075, max=43573, avg=14271.16, stdev=7516.65 00:07:47.875 clat (usec): min=123, max=42165, avg=694.94, stdev=1843.30 00:07:47.875 lat (usec): min=126, max=42176, avg=709.22, stdev=1843.87 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 139], 5.00th=[ 161], 10.00th=[ 235], 20.00th=[ 302], 00:07:47.875 | 30.00th=[ 400], 40.00th=[ 461], 50.00th=[ 570], 60.00th=[ 807], 00:07:47.875 | 70.00th=[ 873], 80.00th=[ 922], 90.00th=[ 971], 95.00th=[ 1004], 00:07:47.875 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[41157], 99.95th=[42206], 00:07:47.875 | 99.99th=[42206] 00:07:47.875 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:07:47.875 slat (nsec): min=4041, max=46340, avg=10046.70, stdev=5484.57 00:07:47.875 clat (usec): min=82, max=733, avg=257.41, stdev=134.50 00:07:47.875 lat (usec): min=87, 
max=748, avg=267.46, stdev=137.15 00:07:47.875 clat percentiles (usec): 00:07:47.875 | 1.00th=[ 86], 5.00th=[ 93], 10.00th=[ 102], 20.00th=[ 117], 00:07:47.875 | 30.00th=[ 182], 40.00th=[ 215], 50.00th=[ 237], 60.00th=[ 269], 00:07:47.875 | 70.00th=[ 310], 80.00th=[ 363], 90.00th=[ 453], 95.00th=[ 529], 00:07:47.875 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 693], 99.95th=[ 734], 00:07:47.875 | 99.99th=[ 734] 00:07:47.875 bw ( KiB/s): min= 7432, max= 7432, per=61.69%, avg=7432.00, stdev= 0.00, samples=1 00:07:47.875 iops : min= 1858, max= 1858, avg=1858.00, stdev= 0.00, samples=1 00:07:47.875 lat (usec) : 100=4.27%, 250=30.00%, 500=35.64%, 750=7.81%, 1000=19.59% 00:07:47.875 lat (msec) : 2=2.60%, 50=0.10% 00:07:47.876 cpu : usr=1.50%, sys=2.30%, ctx=2038, majf=0, minf=1 00:07:47.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.876 issued rwts: total=1013,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.876 00:07:47.876 Run status group 0 (all jobs): 00:07:47.876 READ: bw=7463KiB/s (7642kB/s), 74.5KiB/s-4048KiB/s (76.3kB/s-4145kB/s), io=7612KiB (7795kB), run=1001-1020msec 00:07:47.876 WRITE: bw=11.8MiB/s (12.3MB/s), 2008KiB/s-4092KiB/s (2056kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1020msec 00:07:47.876 00:07:47.876 Disk stats (read/write): 00:07:47.876 nvme0n1: ios=56/512, merge=0/0, ticks=672/197, in_queue=869, util=86.97% 00:07:47.876 nvme0n2: ios=36/512, merge=0/0, ticks=1430/251, in_queue=1681, util=87.95% 00:07:47.876 nvme0n3: ios=648/1024, merge=0/0, ticks=775/448, in_queue=1223, util=92.06% 00:07:47.876 nvme0n4: ios=892/1024, merge=0/0, ticks=686/262, in_queue=948, util=97.42% 00:07:47.876 13:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:07:47.876 [global] 00:07:47.876 thread=1 00:07:47.876 invalidate=1 00:07:47.876 rw=randwrite 00:07:47.876 time_based=1 00:07:47.876 runtime=1 00:07:47.876 ioengine=libaio 00:07:47.876 direct=1 00:07:47.876 bs=4096 00:07:47.876 iodepth=1 00:07:47.876 norandommap=0 00:07:47.876 numjobs=1 00:07:47.876 00:07:47.876 verify_dump=1 00:07:47.876 verify_backlog=512 00:07:47.876 verify_state_save=0 00:07:47.876 do_verify=1 00:07:47.876 verify=crc32c-intel 00:07:47.876 [job0] 00:07:47.876 filename=/dev/nvme0n1 00:07:47.876 [job1] 00:07:47.876 filename=/dev/nvme0n2 00:07:47.876 [job2] 00:07:47.876 filename=/dev/nvme0n3 00:07:47.876 [job3] 00:07:47.876 filename=/dev/nvme0n4 00:07:47.876 Could not set queue depth (nvme0n1) 00:07:47.876 Could not set queue depth (nvme0n2) 00:07:47.876 Could not set queue depth (nvme0n3) 00:07:47.876 Could not set queue depth (nvme0n4) 00:07:48.136 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:48.136 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:48.136 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:48.136 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:48.136 fio-3.35 00:07:48.136 Starting 4 threads 00:07:49.546 00:07:49.546 job0: (groupid=0, 
jobs=1): err= 0: pid=704928: Wed Nov 6 13:51:28 2024 00:07:49.546 read: IOPS=17, BW=71.2KiB/s (72.9kB/s)(72.0KiB/1011msec) 00:07:49.546 slat (nsec): min=10619, max=25324, avg=21449.56, stdev=5844.89 00:07:49.546 clat (usec): min=1171, max=42987, avg=39914.14, stdev=9682.86 00:07:49.546 lat (usec): min=1181, max=43012, avg=39935.59, stdev=9685.55 00:07:49.546 clat percentiles (usec): 00:07:49.546 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:07:49.546 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:07:49.546 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:07:49.546 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:07:49.546 | 99.99th=[42730] 00:07:49.546 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:07:49.546 slat (nsec): min=3932, max=54090, avg=13518.02, stdev=6159.31 00:07:49.546 clat (usec): min=187, max=941, avg=553.53, stdev=129.92 00:07:49.546 lat (usec): min=192, max=954, avg=567.05, stdev=131.35 00:07:49.546 clat percentiles (usec): 00:07:49.546 | 1.00th=[ 210], 5.00th=[ 318], 10.00th=[ 396], 20.00th=[ 449], 00:07:49.546 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:07:49.546 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 758], 00:07:49.546 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 938], 99.95th=[ 938], 00:07:49.546 | 99.99th=[ 938] 00:07:49.546 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, avg=4096.00, stdev= 0.00, samples=1 00:07:49.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:49.546 lat (usec) : 250=1.89%, 500=29.06%, 750=59.81%, 1000=5.85% 00:07:49.546 lat (msec) : 2=0.19%, 50=3.21% 00:07:49.546 cpu : usr=0.30%, sys=0.69%, ctx=530, majf=0, minf=1 00:07:49.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:49.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.546 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:49.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:49.546 job1: (groupid=0, jobs=1): err= 0: pid=704929: Wed Nov 6 13:51:28 2024 00:07:49.546 read: IOPS=754, BW=3017KiB/s (3089kB/s)(3020KiB/1001msec) 00:07:49.546 slat (nsec): min=2509, max=44780, avg=10924.97, stdev=5143.06 00:07:49.546 clat (usec): min=154, max=931, avg=639.25, stdev=111.89 00:07:49.547 lat (usec): min=157, max=942, avg=650.17, stdev=113.62 00:07:49.547 clat percentiles (usec): 00:07:49.547 | 1.00th=[ 351], 5.00th=[ 449], 10.00th=[ 486], 20.00th=[ 545], 00:07:49.547 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:07:49.547 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:07:49.547 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 930], 99.95th=[ 930], 00:07:49.547 | 99.99th=[ 930] 00:07:49.547 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:07:49.547 slat (nsec): min=3442, max=49050, avg=13348.15, stdev=5371.91 00:07:49.547 clat (usec): min=129, max=855, avg=474.61, stdev=123.55 00:07:49.547 lat (usec): min=133, max=869, avg=487.96, stdev=125.09 00:07:49.547 clat percentiles (usec): 00:07:49.547 | 1.00th=[ 208], 5.00th=[ 273], 10.00th=[ 318], 20.00th=[ 363], 00:07:49.547 | 30.00th=[ 408], 40.00th=[ 445], 50.00th=[ 478], 60.00th=[ 506], 00:07:49.547 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 685], 00:07:49.547 | 99.00th=[ 783], 99.50th=[ 807], 
99.90th=[ 857], 99.95th=[ 857], 00:07:49.547 | 99.99th=[ 857] 00:07:49.547 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, avg=4096.00, stdev= 0.00, samples=1 00:07:49.547 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:49.547 lat (usec) : 250=2.14%, 500=36.65%, 750=53.85%, 1000=7.36% 00:07:49.547 cpu : usr=1.30%, sys=4.00%, ctx=1780, majf=0, minf=1 00:07:49.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:49.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.547 issued rwts: total=755,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:49.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:49.547 job2: (groupid=0, jobs=1): err= 0: pid=704930: Wed Nov 6 13:51:28 2024 00:07:49.547 read: IOPS=800, BW=3201KiB/s (3278kB/s)(3204KiB/1001msec) 00:07:49.547 slat (nsec): min=2557, max=43484, avg=11474.28, stdev=5340.57 00:07:49.547 clat (usec): min=329, max=893, avg=641.11, stdev=110.31 00:07:49.547 lat (usec): min=332, max=921, avg=652.58, stdev=111.99 00:07:49.547 clat percentiles (usec): 00:07:49.547 | 1.00th=[ 392], 5.00th=[ 453], 10.00th=[ 494], 20.00th=[ 545], 00:07:49.547 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:07:49.547 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 807], 00:07:49.547 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 898], 99.95th=[ 898], 00:07:49.547 | 99.99th=[ 898] 00:07:49.547 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:07:49.547 slat (nsec): min=3453, max=64757, avg=14414.61, stdev=5831.23 00:07:49.547 clat (usec): min=107, max=764, avg=442.13, stdev=118.40 00:07:49.547 lat (usec): min=110, max=800, avg=456.54, stdev=120.62 00:07:49.547 clat percentiles (usec): 00:07:49.547 | 1.00th=[ 190], 5.00th=[ 245], 10.00th=[ 293], 20.00th=[ 338], 00:07:49.547 | 30.00th=[ 375], 40.00th=[ 408], 50.00th=[ 441], 60.00th=[ 469], 00:07:49.547 | 70.00th=[ 502], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 627], 00:07:49.547 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 750], 99.95th=[ 766], 00:07:49.547 | 99.99th=[ 766] 00:07:49.547 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, avg=4096.00, stdev= 0.00, samples=1 00:07:49.547 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:49.547 lat (usec) : 250=3.23%, 500=40.38%, 750=48.11%, 1000=8.27% 00:07:49.547 cpu : usr=2.20%, sys=3.50%, ctx=1827, majf=0, minf=1 00:07:49.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:49.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.547 issued rwts: total=801,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:49.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:49.547 job3: (groupid=0, jobs=1): err= 0: pid=704931: Wed Nov 6 13:51:28 2024 00:07:49.547 read: IOPS=26, BW=107KiB/s (109kB/s)(108KiB/1014msec) 00:07:49.547 slat (nsec): min=11276, max=30440, avg=23981.15, stdev=5661.55 00:07:49.547 clat (usec): min=849, max=42995, avg=29643.11, stdev=18947.52 00:07:49.547 lat (usec): min=875, max=43021, avg=29667.09, stdev=18945.64 00:07:49.547 clat percentiles (usec): 00:07:49.547 | 1.00th=[ 848], 5.00th=[ 963], 10.00th=[ 971], 20.00th=[ 1057], 00:07:49.547 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:07:49.547 | 70.00th=[41681], 80.00th=[42206], 
90.00th=[42206], 95.00th=[42206], 00:07:49.547 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:07:49.547 | 99.99th=[43254] 00:07:49.547 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:07:49.547 slat (nsec): min=4109, max=46923, avg=12961.62, stdev=7166.57 00:07:49.547 clat (usec): min=223, max=615, avg=392.35, stdev=66.99 00:07:49.547 lat (usec): min=228, max=630, avg=405.31, stdev=69.74 00:07:49.547 clat percentiles (usec): 00:07:49.547 | 1.00th=[ 243], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 322], 00:07:49.547 | 30.00th=[ 347], 40.00th=[ 383], 50.00th=[ 408], 60.00th=[ 416], 00:07:49.547 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 490], 00:07:49.547 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 619], 99.95th=[ 619], 00:07:49.547 | 99.99th=[ 619] 00:07:49.547 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, avg=4096.00, stdev= 0.00, samples=1 00:07:49.547 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:49.547 lat (usec) : 250=1.48%, 500=90.35%, 750=3.15%, 1000=0.74% 00:07:49.547 lat (msec) : 2=0.74%, 50=3.53% 00:07:49.547 cpu : usr=0.20%, sys=0.69%, ctx=540, majf=0, minf=1 00:07:49.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:49.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:49.547 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:49.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:49.547 00:07:49.547 Run status group 0 (all jobs): 00:07:49.547 READ: bw=6316KiB/s (6467kB/s), 71.2KiB/s-3201KiB/s (72.9kB/s-3278kB/s), io=6404KiB (6558kB), run=1001-1014msec 00:07:49.547 WRITE: bw=11.8MiB/s (12.4MB/s), 2020KiB/s-4092KiB/s (2068kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1014msec 00:07:49.547 00:07:49.547 Disk stats (read/write): 00:07:49.547 nvme0n1: ios=64/512, merge=0/0, ticks=603/277, in_queue=880, util=89.08% 00:07:49.547 nvme0n2: ios=560/1024, merge=0/0, ticks=1251/378, in_queue=1629, util=98.27% 00:07:49.547 nvme0n3: ios=600/1024, merge=0/0, ticks=1271/298, in_queue=1569, util=99.06% 00:07:49.547 nvme0n4: ios=46/512, merge=0/0, ticks=1602/198, in_queue=1800, util=98.64% 00:07:49.547 13:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:07:49.547 [global] 00:07:49.547 thread=1 00:07:49.547 invalidate=1 00:07:49.547 rw=write 00:07:49.547 time_based=1 00:07:49.547 runtime=1 00:07:49.547 ioengine=libaio 00:07:49.547 direct=1 00:07:49.547 bs=4096 00:07:49.547 iodepth=128 00:07:49.547 norandommap=0 00:07:49.547 numjobs=1 00:07:49.547 00:07:49.547 verify_dump=1 00:07:49.547 verify_backlog=512 00:07:49.547 verify_state_save=0 00:07:49.547 do_verify=1 00:07:49.547 verify=crc32c-intel 00:07:49.547 [job0] 00:07:49.547 filename=/dev/nvme0n1 00:07:49.547 [job1] 00:07:49.547 filename=/dev/nvme0n2 00:07:49.547 [job2] 00:07:49.547 filename=/dev/nvme0n3 00:07:49.547 [job3] 00:07:49.547 filename=/dev/nvme0n4 00:07:49.547 Could not set queue depth (nvme0n1) 00:07:49.547 Could not set queue depth (nvme0n2) 00:07:49.547 Could not set queue depth (nvme0n3) 00:07:49.547 Could not set queue depth (nvme0n4) 00:07:49.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:49.806 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:49.806 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:49.806 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:49.806 fio-3.35 00:07:49.806 Starting 4 threads 00:07:51.186 00:07:51.186 job0: (groupid=0, jobs=1): err= 0: pid=705458: Wed Nov 6 13:51:30 2024 00:07:51.186 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:07:51.186 slat (nsec): min=986, max=15662k, avg=144811.95, stdev=943562.31 00:07:51.186 clat (usec): min=1257, max=99175, avg=15903.85, stdev=14935.79 00:07:51.186 lat (usec): min=1259, max=99182, avg=16048.67, stdev=15071.95 00:07:51.186 clat percentiles (usec): 00:07:51.186 | 1.00th=[ 2474], 5.00th=[ 4113], 10.00th=[ 6521], 20.00th=[ 7832], 00:07:51.186 | 30.00th=[ 8160], 40.00th=[11076], 50.00th=[11469], 60.00th=[12649], 00:07:51.186 | 70.00th=[16581], 80.00th=[17957], 90.00th=[24511], 95.00th=[46924], 00:07:51.186 | 99.00th=[89654], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:07:51.186 | 99.99th=[99091] 00:07:51.186 write: IOPS=2349, BW=9396KiB/s (9622kB/s)(9528KiB/1014msec); 0 zone resets 00:07:51.186 slat (nsec): min=1746, max=11683k, avg=259117.26, stdev=1145250.89 00:07:51.186 clat (usec): min=538, max=106133, avg=40362.66, stdev=30885.27 00:07:51.186 lat (usec): min=542, max=106166, avg=40621.77, stdev=31088.11 00:07:51.186 clat percentiles (usec): 00:07:51.186 | 1.00th=[ 963], 5.00th=[ 1876], 10.00th=[ 2638], 20.00th=[ 7242], 00:07:51.186 | 30.00th=[ 13304], 40.00th=[ 22414], 50.00th=[ 39060], 60.00th=[ 50070], 00:07:51.186 | 70.00th=[ 64226], 80.00th=[ 70779], 90.00th=[ 82314], 95.00th=[ 92799], 00:07:51.186 | 99.00th=[102237], 99.50th=[105382], 99.90th=[105382], 99.95th=[105382], 00:07:51.186 | 99.99th=[106431] 00:07:51.186 bw ( KiB/s): min= 5448, max=12566, per=10.03%, avg=9007.00, stdev=5033.19, samples=2 00:07:51.186 iops : min= 1362, max= 3141, avg=2251.50, stdev=1257.94, samples=2 00:07:51.186 lat (usec) : 750=0.11%, 1000=0.54% 00:07:51.186 lat (msec) : 2=3.14%, 4=4.65%, 10=22.44%, 20=27.45%, 50=17.81% 00:07:51.186 lat (msec) : 100=23.05%, 250=0.81% 00:07:51.186 cpu : usr=1.68%, sys=1.97%, ctx=289, majf=0, minf=1 00:07:51.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:07:51.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:51.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:51.186 issued rwts: total=2048,2382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:51.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:51.186 job1: (groupid=0, jobs=1): err= 0: pid=705459: Wed Nov 6 13:51:30 2024 00:07:51.186 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 00:07:51.186 slat (nsec): min=994, max=11975k, avg=83927.20, stdev=650264.10 00:07:51.187 clat (usec): min=779, max=95447, avg=10604.28, stdev=10334.54 00:07:51.187 lat (usec): min=794, max=95451, avg=10688.21, stdev=10416.10 00:07:51.187 clat percentiles (usec): 00:07:51.187 | 1.00th=[ 1303], 5.00th=[ 1745], 10.00th=[ 2638], 20.00th=[ 6718], 00:07:51.187 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 9503], 00:07:51.187 | 70.00th=[11863], 80.00th=[12780], 90.00th=[14615], 95.00th=[19792], 00:07:51.187 | 99.00th=[69731], 99.50th=[88605], 99.90th=[95945], 99.95th=[95945], 00:07:51.187 | 99.99th=[95945] 00:07:51.187 write: IOPS=5302, BW=20.7MiB/s (21.7MB/s)(21.0MiB/1015msec); 0 zone 
resets 00:07:51.187 slat (nsec): min=1650, max=11899k, avg=97805.77, stdev=650564.35 00:07:51.187 clat (usec): min=435, max=85096, avg=15848.68, stdev=20266.41 00:07:51.187 lat (usec): min=437, max=85105, avg=15946.49, stdev=20402.40 00:07:51.187 clat percentiles (usec): 00:07:51.187 | 1.00th=[ 1614], 5.00th=[ 3294], 10.00th=[ 3982], 20.00th=[ 5211], 00:07:51.187 | 30.00th=[ 5932], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7767], 00:07:51.187 | 70.00th=[ 9110], 80.00th=[19530], 90.00th=[52167], 95.00th=[70779], 00:07:51.187 | 99.00th=[80217], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:07:51.187 | 99.99th=[85459] 00:07:51.187 bw ( KiB/s): min=17285, max=24720, per=23.40%, avg=21002.50, stdev=5257.34, samples=2 00:07:51.187 iops : min= 4321, max= 6180, avg=5250.50, stdev=1314.51, samples=2 00:07:51.187 lat (usec) : 500=0.07%, 1000=0.09% 00:07:51.187 lat (msec) : 2=3.10%, 4=8.30%, 10=55.30%, 20=19.87%, 50=6.49% 00:07:51.187 lat (msec) : 100=6.77% 00:07:51.187 cpu : usr=2.66%, sys=3.65%, ctx=453, majf=0, minf=2 00:07:51.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:07:51.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:51.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:51.187 issued rwts: total=4096,5382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:51.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:51.187 job2: (groupid=0, jobs=1): err= 0: pid=705460: Wed Nov 6 13:51:30 2024 00:07:51.187 read: IOPS=6053, BW=23.6MiB/s (24.8MB/s)(24.0MiB/1015msec) 00:07:51.187 slat (nsec): min=988, max=13040k, avg=80608.43, stdev=630078.19 00:07:51.187 clat (usec): min=3906, max=25655, avg=10844.11, stdev=3642.82 00:07:51.187 lat (usec): min=3909, max=25686, avg=10924.72, stdev=3687.45 00:07:51.187 clat percentiles (usec): 00:07:51.187 | 1.00th=[ 5932], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 7635], 00:07:51.187 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10945], 00:07:51.187 | 70.00th=[12125], 80.00th=[13829], 90.00th=[16909], 95.00th=[17957], 00:07:51.187 | 99.00th=[21103], 99.50th=[21627], 99.90th=[23200], 99.95th=[25035], 00:07:51.187 | 99.99th=[25560] 00:07:51.187 write: IOPS=6216, BW=24.3MiB/s (25.5MB/s)(24.6MiB/1015msec); 0 zone resets 00:07:51.187 slat (nsec): min=1677, max=10540k, avg=76880.91, stdev=581085.61 00:07:51.187 clat (usec): min=2440, max=50606, avg=9842.87, stdev=6918.67 00:07:51.187 lat (usec): min=2444, max=50609, avg=9919.75, stdev=6961.55 00:07:51.187 clat percentiles (usec): 00:07:51.187 | 1.00th=[ 4113], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6128], 00:07:51.187 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 8291], 00:07:51.187 | 70.00th=[ 8979], 80.00th=[11338], 90.00th=[17171], 95.00th=[25297], 00:07:51.187 | 99.00th=[41681], 99.50th=[46924], 99.90th=[50594], 99.95th=[50594], 00:07:51.187 | 99.99th=[50594] 00:07:51.187 bw ( KiB/s): min=24127, max=25288, per=27.52%, avg=24707.50, stdev=820.95, samples=2 00:07:51.187 iops : min= 6031, max= 6322, avg=6176.50, stdev=205.77, samples=2 00:07:51.187 lat (msec) : 4=0.39%, 10=61.82%, 20=32.97%, 50=4.71%, 100=0.11% 00:07:51.187 cpu : usr=3.65%, sys=3.06%, ctx=298, majf=0, minf=1 00:07:51.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:07:51.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:51.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:51.187 issued rwts: 
total=6144,6310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:51.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:51.187 job3: (groupid=0, jobs=1): err= 0: pid=705461: Wed Nov 6 13:51:30 2024 00:07:51.187 read: IOPS=8662, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1004msec) 00:07:51.187 slat (nsec): min=974, max=14647k, avg=60215.77, stdev=509158.81 00:07:51.187 clat (usec): min=1694, max=39428, avg=7964.27, stdev=3411.26 00:07:51.187 lat (usec): min=1697, max=39455, avg=8024.49, stdev=3453.05 00:07:51.187 clat percentiles (usec): 00:07:51.187 | 1.00th=[ 2802], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 6259], 00:07:51.187 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7767], 00:07:51.187 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[10814], 95.00th=[12256], 00:07:51.187 | 99.00th=[24773], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:07:51.187 | 99.99th=[39584] 00:07:51.187 write: IOPS=8669, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1004msec); 0 zone resets 00:07:51.187 slat (nsec): min=1674, max=9973.3k, avg=46214.12, stdev=307050.21 00:07:51.187 clat (usec): min=582, max=17418, avg=6667.15, stdev=2047.46 00:07:51.187 lat (usec): min=587, max=17925, avg=6713.36, stdev=2070.21 00:07:51.187 clat percentiles (usec): 00:07:51.187 | 1.00th=[ 1467], 5.00th=[ 2999], 10.00th=[ 4080], 20.00th=[ 5276], 00:07:51.187 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:07:51.187 | 70.00th=[ 7439], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 9503], 00:07:51.187 | 99.00th=[13960], 99.50th=[16188], 99.90th=[16909], 99.95th=[17433], 00:07:51.187 | 99.99th=[17433] 00:07:51.187 bw ( KiB/s): min=32768, max=36864, per=38.79%, avg=34816.00, stdev=2896.31, samples=2 00:07:51.187 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:07:51.187 lat (usec) : 750=0.02%, 1000=0.06% 00:07:51.187 lat (msec) : 2=0.94%, 4=4.63%, 10=86.36%, 20=6.74%, 50=1.26% 00:07:51.187 cpu : usr=3.49%, sys=6.48%, ctx=820, majf=0, minf=1 00:07:51.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:07:51.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:51.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:51.187 issued rwts: total=8697,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:51.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:51.187 00:07:51.187 Run status group 0 (all jobs): 00:07:51.187 READ: bw=80.8MiB/s (84.7MB/s), 8079KiB/s-33.8MiB/s (8273kB/s-35.5MB/s), io=82.0MiB (86.0MB), run=1004-1015msec 00:07:51.187 WRITE: bw=87.7MiB/s (91.9MB/s), 9396KiB/s-33.9MiB/s (9622kB/s-35.5MB/s), io=89.0MiB (93.3MB), run=1004-1015msec 00:07:51.187 00:07:51.187 Disk stats (read/write): 00:07:51.187 nvme0n1: ios=1962/2048, merge=0/0, ticks=28526/76144, in_queue=104670, util=87.17% 00:07:51.187 nvme0n2: ios=3496/4608, merge=0/0, ticks=35134/70915, in_queue=106049, util=91.18% 00:07:51.187 nvme0n3: ios=5143/5434, merge=0/0, ticks=53518/51440, in_queue=104958, util=93.66% 00:07:51.187 nvme0n4: ios=7219/7327, merge=0/0, ticks=45994/36758, in_queue=82752, util=95.49% 00:07:51.187 13:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:07:51.187 [global] 00:07:51.187 thread=1 00:07:51.187 invalidate=1 00:07:51.187 rw=randwrite 00:07:51.187 time_based=1 00:07:51.187 runtime=1 00:07:51.187 ioengine=libaio 00:07:51.187 direct=1 00:07:51.187 bs=4096 
00:07:51.187 iodepth=128 00:07:51.187 norandommap=0 00:07:51.187 numjobs=1 00:07:51.187 00:07:51.187 verify_dump=1 00:07:51.187 verify_backlog=512 00:07:51.187 verify_state_save=0 00:07:51.187 do_verify=1 00:07:51.187 verify=crc32c-intel 00:07:51.187 [job0] 00:07:51.187 filename=/dev/nvme0n1 00:07:51.187 [job1] 00:07:51.187 filename=/dev/nvme0n2 00:07:51.187 [job2] 00:07:51.187 filename=/dev/nvme0n3 00:07:51.187 [job3] 00:07:51.187 filename=/dev/nvme0n4 00:07:51.187 Could not set queue depth (nvme0n1) 00:07:51.187 Could not set queue depth (nvme0n2) 00:07:51.187 Could not set queue depth (nvme0n3) 00:07:51.187 Could not set queue depth (nvme0n4) 00:07:51.187 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:51.187 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:51.187 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:51.187 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:51.187 fio-3.35 00:07:51.187 Starting 4 threads 00:07:52.594 00:07:52.594 job0: (groupid=0, jobs=1): err= 0: pid=705978: Wed Nov 6 13:51:31 2024 00:07:52.594 read: IOPS=8322, BW=32.5MiB/s (34.1MB/s)(32.6MiB/1003msec) 00:07:52.594 slat (nsec): min=939, max=6492.9k, avg=60233.51, stdev=391601.13 00:07:52.594 clat (usec): min=2160, max=14435, avg=7369.75, stdev=1019.74 00:07:52.594 lat (usec): min=2162, max=14437, avg=7429.98, stdev=1066.92 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6849], 00:07:52.594 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7439], 00:07:52.594 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 9241], 00:07:52.594 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11731], 99.95th=[12387], 00:07:52.594 | 99.99th=[14484] 00:07:52.594 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:07:52.594 slat (nsec): min=1554, max=7959.4k, avg=53622.96, stdev=292909.81 00:07:52.594 clat (usec): min=2160, max=36560, avg=7535.08, stdev=3288.33 00:07:52.594 lat (usec): min=2164, max=36563, avg=7588.70, stdev=3308.63 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 6652], 00:07:52.594 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7111], 00:07:52.594 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 8356], 95.00th=[ 9634], 00:07:52.594 | 99.00th=[27395], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:07:52.594 | 99.99th=[36439] 00:07:52.594 bw ( KiB/s): min=34360, max=35272, per=34.25%, avg=34816.00, stdev=644.88, samples=2 00:07:52.594 iops : min= 8590, max= 8818, avg=8704.00, stdev=161.22, samples=2 00:07:52.594 lat (msec) : 4=0.47%, 10=96.49%, 20=2.25%, 50=0.79% 00:07:52.594 cpu : usr=3.79%, sys=4.89%, ctx=1079, majf=0, minf=1 00:07:52.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:07:52.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:52.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:52.594 issued rwts: total=8347,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:52.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:52.594 job1: (groupid=0, jobs=1): err= 0: pid=705980: Wed Nov 6 13:51:31 2024 00:07:52.594 read: IOPS=4188, BW=16.4MiB/s 
(17.2MB/s)(16.5MiB/1008msec) 00:07:52.594 slat (nsec): min=953, max=18538k, avg=110696.20, stdev=810417.57 00:07:52.594 clat (usec): min=2001, max=79858, avg=12011.64, stdev=8020.71 00:07:52.594 lat (usec): min=3823, max=79866, avg=12122.33, stdev=8120.15 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 4948], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7767], 00:07:52.594 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10290], 00:07:52.594 | 70.00th=[11338], 80.00th=[14615], 90.00th=[19268], 95.00th=[27919], 00:07:52.594 | 99.00th=[47449], 99.50th=[56886], 99.90th=[80217], 99.95th=[80217], 00:07:52.594 | 99.99th=[80217] 00:07:52.594 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:07:52.594 slat (nsec): min=1655, max=19537k, avg=113168.31, stdev=739304.23 00:07:52.594 clat (usec): min=2191, max=79866, avg=16707.98, stdev=11402.23 00:07:52.594 lat (usec): min=2194, max=79912, avg=16821.15, stdev=11461.03 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 3523], 5.00th=[ 5473], 10.00th=[ 7111], 20.00th=[ 8848], 00:07:52.594 | 30.00th=[10421], 40.00th=[12256], 50.00th=[15008], 60.00th=[16057], 00:07:52.594 | 70.00th=[18220], 80.00th=[21103], 90.00th=[26608], 95.00th=[35914], 00:07:52.594 | 99.00th=[69731], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:07:52.594 | 99.99th=[80217] 00:07:52.594 bw ( KiB/s): min=16624, max=20224, per=18.13%, avg=18424.00, stdev=2545.58, samples=2 00:07:52.594 iops : min= 4156, max= 5056, avg=4606.00, stdev=636.40, samples=2 00:07:52.594 lat (msec) : 4=0.84%, 10=40.45%, 20=41.80%, 50=15.29%, 100=1.62% 00:07:52.594 cpu : usr=2.09%, sys=3.18%, ctx=414, majf=0, minf=1 00:07:52.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:07:52.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:52.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:52.594 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:52.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:52.594 job2: (groupid=0, jobs=1): err= 0: pid=705982: Wed Nov 6 13:51:31 2024 00:07:52.594 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:07:52.594 slat (nsec): min=959, max=15023k, avg=95372.61, stdev=769047.16 00:07:52.594 clat (usec): min=3533, max=53593, avg=14240.12, stdev=6876.06 00:07:52.594 lat (usec): min=3535, max=57038, avg=14335.50, stdev=6939.90 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 4293], 5.00th=[ 7635], 10.00th=[ 8979], 20.00th=[ 9241], 00:07:52.594 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[13173], 60.00th=[14615], 00:07:52.594 | 70.00th=[16319], 80.00th=[18220], 90.00th=[20841], 95.00th=[23725], 00:07:52.594 | 99.00th=[43779], 99.50th=[44303], 99.90th=[53740], 99.95th=[53740], 00:07:52.594 | 99.99th=[53740] 00:07:52.594 write: IOPS=4076, BW=15.9MiB/s (16.7MB/s)(16.1MiB/1008msec); 0 zone resets 00:07:52.594 slat (nsec): min=1673, max=13507k, avg=121592.29, stdev=827979.46 00:07:52.594 clat (usec): min=851, max=79594, avg=16899.31, stdev=12832.69 00:07:52.594 lat (usec): min=1762, max=79629, avg=17020.90, stdev=12920.20 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 2573], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 7701], 00:07:52.594 | 30.00th=[ 8848], 40.00th=[11207], 50.00th=[13435], 60.00th=[15008], 00:07:52.594 | 70.00th=[16188], 80.00th=[26870], 90.00th=[33162], 95.00th=[38536], 00:07:52.594 | 99.00th=[69731], 99.50th=[77071], 99.90th=[78119], 
99.95th=[78119], 00:07:52.594 | 99.99th=[79168] 00:07:52.594 bw ( KiB/s): min=15568, max=17200, per=16.12%, avg=16384.00, stdev=1154.00, samples=2 00:07:52.594 iops : min= 3892, max= 4300, avg=4096.00, stdev=288.50, samples=2 00:07:52.594 lat (usec) : 1000=0.01% 00:07:52.594 lat (msec) : 2=0.10%, 4=2.40%, 10=31.68%, 20=45.25%, 50=18.96% 00:07:52.594 lat (msec) : 100=1.60% 00:07:52.594 cpu : usr=1.49%, sys=3.87%, ctx=333, majf=0, minf=1 00:07:52.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:07:52.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:52.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:52.594 issued rwts: total=4096,4109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:52.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:52.594 job3: (groupid=0, jobs=1): err= 0: pid=705983: Wed Nov 6 13:51:31 2024 00:07:52.594 read: IOPS=8075, BW=31.5MiB/s (33.1MB/s)(31.7MiB/1005msec) 00:07:52.594 slat (nsec): min=913, max=7459.9k, avg=61751.84, stdev=493633.17 00:07:52.594 clat (usec): min=1896, max=15922, avg=8207.13, stdev=1808.39 00:07:52.594 lat (usec): min=2508, max=15948, avg=8268.89, stdev=1846.67 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 3163], 5.00th=[ 5473], 10.00th=[ 6587], 20.00th=[ 7308], 00:07:52.594 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:07:52.594 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10683], 95.00th=[11600], 00:07:52.594 | 99.00th=[13698], 99.50th=[14091], 99.90th=[14746], 99.95th=[14746], 00:07:52.594 | 99.99th=[15926] 00:07:52.594 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:07:52.594 slat (nsec): min=1521, max=11706k, avg=54104.10, stdev=439478.59 00:07:52.594 clat (usec): min=816, max=22023, avg=7437.96, stdev=2316.13 00:07:52.594 lat (usec): min=824, max=23285, avg=7492.07, stdev=2344.30 00:07:52.594 clat percentiles (usec): 00:07:52.594 | 1.00th=[ 2311], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5997], 00:07:52.594 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7767], 00:07:52.594 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[10421], 95.00th=[11207], 00:07:52.594 | 99.00th=[14615], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:07:52.594 | 99.99th=[22152] 00:07:52.594 bw ( KiB/s): min=32768, max=32768, per=32.24%, avg=32768.00, stdev= 0.00, samples=2 00:07:52.594 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:07:52.594 lat (usec) : 1000=0.10% 00:07:52.594 lat (msec) : 2=0.20%, 4=3.32%, 10=82.97%, 20=13.10%, 50=0.31% 00:07:52.594 cpu : usr=2.79%, sys=5.38%, ctx=553, majf=0, minf=2 00:07:52.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:07:52.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:52.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:52.595 issued rwts: total=8116,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:52.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:52.595 00:07:52.595 Run status group 0 (all jobs): 00:07:52.595 READ: bw=96.0MiB/s (101MB/s), 15.9MiB/s-32.5MiB/s (16.6MB/s-34.1MB/s), io=96.8MiB (102MB), run=1003-1008msec 00:07:52.595 WRITE: bw=99.3MiB/s (104MB/s), 15.9MiB/s-33.9MiB/s (16.7MB/s-35.5MB/s), io=100MiB (105MB), run=1003-1008msec 00:07:52.595 00:07:52.595 Disk stats (read/write): 00:07:52.595 nvme0n1: ios=7014/7168, merge=0/0, ticks=28899/29899, in_queue=58798, util=87.27% 
00:07:52.595 nvme0n2: ios=3332/3584, merge=0/0, ticks=41172/61789, in_queue=102961, util=88.18% 00:07:52.595 nvme0n3: ios=3541/3584, merge=0/0, ticks=41260/56934, in_queue=98194, util=92.41% 00:07:52.595 nvme0n4: ios=6713/6771, merge=0/0, ticks=53691/47795, in_queue=101486, util=96.91% 00:07:52.595 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:07:52.595 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=706315 00:07:52.595 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:07:52.595 13:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:07:52.595 [global] 00:07:52.595 thread=1 00:07:52.595 invalidate=1 00:07:52.595 rw=read 00:07:52.595 time_based=1 00:07:52.595 runtime=10 00:07:52.595 ioengine=libaio 00:07:52.595 direct=1 00:07:52.595 bs=4096 00:07:52.595 iodepth=1 00:07:52.595 norandommap=1 00:07:52.595 numjobs=1 00:07:52.595 00:07:52.595 [job0] 00:07:52.595 filename=/dev/nvme0n1 00:07:52.595 [job1] 00:07:52.595 filename=/dev/nvme0n2 00:07:52.595 [job2] 00:07:52.595 filename=/dev/nvme0n3 00:07:52.595 [job3] 00:07:52.595 filename=/dev/nvme0n4 00:07:52.595 Could not set queue depth (nvme0n1) 00:07:52.595 Could not set queue depth (nvme0n2) 00:07:52.595 Could not set queue depth (nvme0n3) 00:07:52.595 Could not set queue depth (nvme0n4) 00:07:52.858 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:52.858 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:52.858 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:52.858 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:52.858 fio-3.35 00:07:52.858 Starting 4 threads 00:07:55.402 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:07:55.662 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10809344, buflen=4096 00:07:55.662 fio: pid=706510, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:55.662 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:07:55.662 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5459968, buflen=4096 00:07:55.662 fio: pid=706509, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:55.662 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:55.663 13:51:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:07:55.923 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12967936, buflen=4096 00:07:55.923 fio: pid=706505, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:55.923 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:55.923 13:51:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:07:56.183 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:56.183 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:07:56.183 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=978944, buflen=4096 00:07:56.183 fio: pid=706506, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:56.183 00:07:56.183 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=706505: Wed Nov 6 13:51:35 2024 00:07:56.183 read: IOPS=1068, BW=4274KiB/s (4377kB/s)(12.4MiB/2963msec) 00:07:56.183 slat (usec): min=2, max=8398, avg=21.10, stdev=193.37 00:07:56.183 clat (usec): min=462, max=1239, avg=910.65, stdev=81.46 00:07:56.183 lat (usec): min=475, max=9435, avg=931.75, stdev=211.75 00:07:56.183 clat percentiles (usec): 00:07:56.183 | 1.00th=[ 685], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 848], 00:07:56.183 | 30.00th=[ 881], 40.00th=[ 906], 50.00th=[ 922], 60.00th=[ 938], 00:07:56.183 | 70.00th=[ 955], 80.00th=[ 979], 90.00th=[ 1004], 95.00th=[ 1029], 00:07:56.183 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1172], 99.95th=[ 1221], 00:07:56.183 | 99.99th=[ 1237] 00:07:56.183 bw ( KiB/s): min= 4199, max= 4328, per=45.37%, avg=4247.80, stdev=55.64, samples=5 00:07:56.183 iops : min= 1049, max= 1082, avg=1061.80, stdev=14.08, samples=5 00:07:56.183 lat (usec) : 500=0.06%, 750=3.63%, 1000=85.19% 00:07:56.183 lat (msec) : 2=11.08% 00:07:56.183 cpu : usr=1.28%, sys=2.87%, ctx=3169, majf=0, minf=1 00:07:56.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:56.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.183 issued rwts: total=3167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:56.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:56.183 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=706506: Wed Nov 6 13:51:35 2024 00:07:56.183 read: IOPS=76, BW=303KiB/s (311kB/s)(956KiB/3152msec) 00:07:56.183 slat (usec): min=2, max=5647, avg=56.15, stdev=433.11 00:07:56.183 clat (usec): min=483, max=43010, avg=13120.86, stdev=18872.89 00:07:56.183 lat (usec): min=495, max=46984, avg=13177.13, stdev=18940.71 00:07:56.183 clat percentiles (usec): 00:07:56.183 | 1.00th=[ 498], 5.00th=[ 611], 10.00th=[ 668], 20.00th=[ 742], 00:07:56.183 | 30.00th=[ 807], 40.00th=[ 889], 50.00th=[ 938], 60.00th=[ 996], 00:07:56.183 | 70.00th=[ 8717], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:07:56.183 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:07:56.183 | 99.99th=[43254] 00:07:56.183 bw ( KiB/s): min= 88, max= 1408, per=3.34%, avg=313.17, stdev=536.37, samples=6 00:07:56.183 iops : min= 22, max= 352, avg=78.17, stdev=134.15, samples=6 00:07:56.183 lat (usec) : 500=1.25%, 750=20.42%, 1000=38.75% 00:07:56.183 lat (msec) : 2=9.17%, 10=0.42%, 50=29.58% 00:07:56.183 cpu : usr=0.16%, sys=0.16%, ctx=242, majf=0, minf=2 00:07:56.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:56.183 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.183 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.183 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:56.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:56.183 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=706509: Wed Nov 6 13:51:35 2024 00:07:56.183 read: IOPS=474, BW=1898KiB/s (1943kB/s)(5332KiB/2810msec) 00:07:56.183 slat (nsec): min=2573, max=45672, avg=16028.77, stdev=6211.33 00:07:56.183 clat (usec): min=394, max=43015, avg=2086.97, stdev=6855.04 00:07:56.183 lat (usec): min=405, max=43040, avg=2102.99, stdev=6856.31 00:07:56.183 clat percentiles (usec): 00:07:56.183 | 1.00th=[ 644], 5.00th=[ 725], 10.00th=[ 783], 20.00th=[ 840], 00:07:56.183 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[ 930], 60.00th=[ 955], 00:07:56.183 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1090], 00:07:56.183 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:07:56.183 | 99.99th=[43254] 00:07:56.183 bw ( KiB/s): min= 96, max= 4319, per=22.62%, avg=2118.20, stdev=2039.51, samples=5 00:07:56.183 iops : min= 24, max= 1079, avg=529.40, stdev=509.67, samples=5 00:07:56.183 lat (usec) : 500=0.15%, 750=6.22%, 1000=74.44% 00:07:56.183 lat (msec) : 2=16.19%, 4=0.07%, 50=2.85% 00:07:56.183 cpu : usr=0.50%, sys=0.96%, ctx=1334, majf=0, minf=2 00:07:56.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:56.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.183 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.183 issued rwts: total=1334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:56.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:56.183 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=706510: Wed Nov 6 13:51:35 2024 00:07:56.183 read: IOPS=996, BW=3983KiB/s (4079kB/s)(10.3MiB/2650msec) 00:07:56.183 slat (nsec): min=2503, max=53674, avg=17495.86, stdev=6256.27 00:07:56.183 clat (usec): min=437, max=1566, avg=981.92, stdev=193.46 00:07:56.183 lat (usec): min=453, max=1581, avg=999.42, stdev=196.17 00:07:56.183 clat percentiles (usec): 00:07:56.183 | 1.00th=[ 570], 5.00th=[ 627], 10.00th=[ 693], 20.00th=[ 791], 00:07:56.183 | 30.00th=[ 898], 40.00th=[ 963], 50.00th=[ 1012], 60.00th=[ 1057], 00:07:56.183 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1221], 95.00th=[ 1287], 00:07:56.183 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1500], 99.95th=[ 1500], 00:07:56.183 | 99.99th=[ 1565] 00:07:56.183 bw ( KiB/s): min= 3488, max= 4640, per=42.67%, avg=3995.20, stdev=587.70, samples=5 00:07:56.184 iops : min= 872, max= 1160, avg=998.80, stdev=146.93, samples=5 00:07:56.184 lat (usec) : 500=0.23%, 750=15.87%, 1000=31.17% 00:07:56.184 lat (msec) : 2=52.69% 00:07:56.184 cpu : usr=0.98%, sys=3.06%, ctx=2640, majf=0, minf=2 00:07:56.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:56.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.184 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:56.184 issued rwts: total=2640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:56.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:56.184 00:07:56.184 Run status group 0 (all jobs): 00:07:56.184 READ: bw=9362KiB/s (9586kB/s), 
303KiB/s-4274KiB/s (311kB/s-4377kB/s), io=28.8MiB (30.2MB), run=2650-3152msec 00:07:56.184 00:07:56.184 Disk stats (read/write): 00:07:56.184 nvme0n1: ios=3036/0, merge=0/0, ticks=2427/0, in_queue=2427, util=94.72% 00:07:56.184 nvme0n2: ios=237/0, merge=0/0, ticks=3035/0, in_queue=3035, util=95.48% 00:07:56.184 nvme0n3: ios=1327/0, merge=0/0, ticks=2440/0, in_queue=2440, util=95.99% 00:07:56.184 nvme0n4: ios=2577/0, merge=0/0, ticks=2327/0, in_queue=2327, util=96.42% 00:07:56.184 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:56.184 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:07:56.444 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:56.444 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:07:56.703 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:56.703 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:07:56.703 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:56.703 13:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 706315 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:07:56.963 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:07:56.963 nvmf hotplug test: fio failed as expected 00:07:56.963 
13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.222 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:07:57.222 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.223 rmmod nvme_tcp 00:07:57.223 rmmod nvme_fabrics 00:07:57.223 rmmod nvme_keyring 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 702487 ']' 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 702487 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 702487 ']' 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 702487 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 702487 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 702487' 00:07:57.223 killing process with pid 702487 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 702487 00:07:57.223 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 702487 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.482 13:51:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.482 13:51:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.388 00:07:59.388 real 0m25.631s 00:07:59.388 user 2m12.508s 00:07:59.388 sys 0m6.890s 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:59.388 ************************************ 00:07:59.388 END TEST nvmf_fio_target 00:07:59.388 ************************************ 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.388 ************************************ 00:07:59.388 START TEST nvmf_bdevio 00:07:59.388 ************************************ 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:07:59.388 * Looking for test storage... 
00:07:59.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:59.388 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:59.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.647 --rc genhtml_branch_coverage=1 00:07:59.647 --rc genhtml_function_coverage=1 00:07:59.647 --rc genhtml_legend=1 00:07:59.647 --rc geninfo_all_blocks=1 00:07:59.647 --rc geninfo_unexecuted_blocks=1 00:07:59.647 00:07:59.647 ' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:59.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.647 --rc genhtml_branch_coverage=1 00:07:59.647 --rc genhtml_function_coverage=1 00:07:59.647 --rc genhtml_legend=1 00:07:59.647 --rc geninfo_all_blocks=1 00:07:59.647 --rc geninfo_unexecuted_blocks=1 00:07:59.647 00:07:59.647 ' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:59.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.647 --rc genhtml_branch_coverage=1 00:07:59.647 --rc genhtml_function_coverage=1 00:07:59.647 --rc genhtml_legend=1 00:07:59.647 --rc geninfo_all_blocks=1 00:07:59.647 --rc geninfo_unexecuted_blocks=1 00:07:59.647 00:07:59.647 ' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:59.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.647 --rc genhtml_branch_coverage=1 00:07:59.647 --rc genhtml_function_coverage=1 00:07:59.647 --rc genhtml_legend=1 00:07:59.647 --rc geninfo_all_blocks=1 00:07:59.647 --rc geninfo_unexecuted_blocks=1 00:07:59.647 00:07:59.647 ' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.647 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.648 13:51:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.931 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:04.932 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:04.932 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.932 13:51:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:04.932 Found net devices under 0000:31:00.0: cvl_0_0 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:04.932 Found net devices under 0000:31:00.1: cvl_0_1 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.932 
13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.932 13:51:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:08:04.932 00:08:04.932 --- 10.0.0.2 ping statistics --- 00:08:04.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.932 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:08:04.932 00:08:04.932 --- 10.0.0.1 ping statistics --- 00:08:04.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.932 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=711872 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 711872 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 711872 ']' 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.932 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:04.932 [2024-11-06 13:51:44.088710] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:08:04.932 [2024-11-06 13:51:44.088775] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.933 [2024-11-06 13:51:44.165835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.933 [2024-11-06 13:51:44.200667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.933 [2024-11-06 13:51:44.200696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.933 [2024-11-06 13:51:44.200701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.933 [2024-11-06 13:51:44.200706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.933 [2024-11-06 13:51:44.200710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.933 [2024-11-06 13:51:44.202019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:04.933 [2024-11-06 13:51:44.202180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:04.933 [2024-11-06 13:51:44.202320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:04.933 [2024-11-06 13:51:44.202533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 [2024-11-06 13:51:44.906339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 Malloc0 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.870 13:51:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:05.870 [2024-11-06 13:51:44.963645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:05.870 { 00:08:05.870 "params": { 00:08:05.870 "name": "Nvme$subsystem", 00:08:05.870 "trtype": "$TEST_TRANSPORT", 00:08:05.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.870 "adrfam": "ipv4", 00:08:05.870 "trsvcid": "$NVMF_PORT", 00:08:05.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.870 "hdgst": ${hdgst:-false}, 00:08:05.870 "ddgst": ${ddgst:-false} 00:08:05.870 }, 00:08:05.870 "method": "bdev_nvme_attach_controller" 00:08:05.870 } 00:08:05.870 EOF 00:08:05.870 )") 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:05.870 13:51:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:05.870 "params": { 00:08:05.870 "name": "Nvme1", 00:08:05.870 "trtype": "tcp", 00:08:05.870 "traddr": "10.0.0.2", 00:08:05.870 "adrfam": "ipv4", 00:08:05.870 "trsvcid": "4420", 00:08:05.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.870 "hdgst": false, 00:08:05.870 "ddgst": false 00:08:05.870 }, 00:08:05.870 "method": "bdev_nvme_attach_controller" 00:08:05.870 }' 00:08:05.870 [2024-11-06 13:51:45.000122] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
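[annotation] The rpc_cmd trace above configures the target end to end, and gen_nvmf_target_json assembles the bdev_nvme_attach_controller config that was just printed and handed to bdevio via /dev/fd/62. A minimal standalone sketch of the same setup, assuming rpc_cmd forwards to scripts/rpc.py on the default socket (that forwarding path is an assumption; the RPC names and arguments are copied verbatim from the trace):

# create the TCP transport and back it with a 64 MiB, 512-byte-block malloc bdev
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# expose the bdev as a namespace of cnode1 and listen on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then attaches as an initiator using the generated JSON, as run above:
# test/bdev/bdevio/bdevio --json /dev/fd/62   (fd 62 carrying the printed config)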
00:08:05.870 [2024-11-06 13:51:45.000173] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712216 ] 00:08:05.870 [2024-11-06 13:51:45.079363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.870 [2024-11-06 13:51:45.117803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.871 [2024-11-06 13:51:45.117959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.871 [2024-11-06 13:51:45.117959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.440 I/O targets: 00:08:06.440 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:06.440 00:08:06.440 00:08:06.440 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.440 http://cunit.sourceforge.net/ 00:08:06.440 00:08:06.440 00:08:06.440 Suite: bdevio tests on: Nvme1n1 00:08:06.440 Test: blockdev write read block ...passed 00:08:06.440 Test: blockdev write zeroes read block ...passed 00:08:06.440 Test: blockdev write zeroes read no split ...passed 00:08:06.440 Test: blockdev write zeroes read split ...passed 00:08:06.440 Test: blockdev write zeroes read split partial ...passed 00:08:06.440 Test: blockdev reset ...[2024-11-06 13:51:45.579645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:06.440 [2024-11-06 13:51:45.579712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11864b0 (9): Bad file descriptor 00:08:06.440 [2024-11-06 13:51:45.635902] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:06.440 passed 00:08:06.440 Test: blockdev write read 8 blocks ...passed 00:08:06.440 Test: blockdev write read size > 128k ...passed 00:08:06.440 Test: blockdev write read invalid size ...passed 00:08:06.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:06.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:06.440 Test: blockdev write read max offset ...passed 00:08:06.701 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:06.701 Test: blockdev writev readv 8 blocks ...passed 00:08:06.701 Test: blockdev writev readv 30 x 1block ...passed 00:08:06.701 Test: blockdev writev readv block ...passed 00:08:06.701 Test: blockdev writev readv size > 128k ...passed 00:08:06.701 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:06.701 Test: blockdev comparev and writev ...[2024-11-06 13:51:45.898349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.898375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.898386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.898393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.898680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.898688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.898698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.898703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.899018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.899029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.899039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.899044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.899346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.899354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.899364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:06.701 [2024-11-06 13:51:45.899370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:06.701 passed 00:08:06.701 Test: blockdev nvme passthru rw ...passed 00:08:06.701 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:51:45.982881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:06.701 [2024-11-06 13:51:45.982891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.983241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:06.701 [2024-11-06 13:51:45.983251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.983586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:06.701 [2024-11-06 13:51:45.983593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:06.701 [2024-11-06 13:51:45.983931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:06.701 [2024-11-06 13:51:45.983938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:06.701 passed 00:08:06.960 Test: blockdev nvme admin passthru ...passed 00:08:06.960 Test: blockdev copy ...passed 00:08:06.960 00:08:06.960 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.960 suites 1 1 n/a 0 0 00:08:06.960 tests 23 23 23 0 0 00:08:06.960 asserts 152 152 152 0 n/a 00:08:06.960 00:08:06.960 Elapsed time = 1.239 seconds 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.960 rmmod nvme_tcp 00:08:06.960 rmmod nvme_fabrics 00:08:06.960 rmmod nvme_keyring 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
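[annotation] The nvmfcleanup trace just above (nvmf/common.sh lines 121-129 in the xtrace) is a bounded retry around unloading the kernel initiator modules; the rmmod lines are modprobe's verbose output from the first, successful pass. A reconstruction of the loop from the trace, with the break-on-success condition assumed since only one iteration is visible:

sync
set +e                         # unloading may fail while connections drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e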
00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 711872 ']' 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 711872 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 711872 ']' 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 711872 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.960 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 711872 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 711872' 00:08:07.219 killing process with pid 711872 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 711872 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 711872 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.219 13:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:09.759 00:08:09.759 real 0m9.821s 00:08:09.759 user 0m12.592s 00:08:09.759 sys 0m4.431s 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:09.759 ************************************ 00:08:09.759 END TEST nvmf_bdevio 00:08:09.759 ************************************ 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:09.759 00:08:09.759 real 4m27.921s 00:08:09.759 user 10m57.881s 00:08:09.759 sys 1m24.879s 00:08:09.759 
13:51:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.759 ************************************ 00:08:09.759 END TEST nvmf_target_core 00:08:09.759 ************************************ 00:08:09.759 13:51:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:09.759 13:51:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.759 13:51:48 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.759 13:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.759 ************************************ 00:08:09.759 START TEST nvmf_target_extra 00:08:09.759 ************************************ 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:09.759 * Looking for test storage... 00:08:09.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.759 --rc genhtml_branch_coverage=1 00:08:09.759 --rc genhtml_function_coverage=1 00:08:09.759 --rc genhtml_legend=1 00:08:09.759 --rc geninfo_all_blocks=1 00:08:09.759 --rc geninfo_unexecuted_blocks=1 00:08:09.759 00:08:09.759 ' 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.759 --rc genhtml_branch_coverage=1 00:08:09.759 --rc genhtml_function_coverage=1 00:08:09.759 --rc genhtml_legend=1 00:08:09.759 --rc geninfo_all_blocks=1 00:08:09.759 --rc geninfo_unexecuted_blocks=1 00:08:09.759 00:08:09.759 ' 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.759 --rc genhtml_branch_coverage=1 00:08:09.759 --rc genhtml_function_coverage=1 00:08:09.759 --rc genhtml_legend=1 00:08:09.759 --rc geninfo_all_blocks=1 00:08:09.759 --rc geninfo_unexecuted_blocks=1 00:08:09.759 00:08:09.759 ' 00:08:09.759 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.759 --rc genhtml_branch_coverage=1 00:08:09.759 --rc genhtml_function_coverage=1 00:08:09.759 --rc genhtml_legend=1 00:08:09.759 --rc geninfo_all_blocks=1 00:08:09.759 --rc geninfo_unexecuted_blocks=1 00:08:09.759 00:08:09.760 ' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
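[annotation] The lt/cmp_versions xtrace earlier in this block is the harness gating on the installed lcov version (here 1.15 < 2, so the legacy LCOV_OPTS are exported). A sketch reconstructing that comparison from the trace; the '>'/'=' operator arms and the handling of non-numeric components are not visible above and are assumptions:

# split versions on '.', '-' or ':' and compare component-wise (sketch)
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left component greater: not '<'
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1.15 vs 2 returns here
    done
    return 1                                              # equal is not strictly '<'
}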
00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:09.760 ************************************ 00:08:09.760 START TEST nvmf_example 00:08:09.760 ************************************ 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:09.760 * Looking for test storage... 
00:08:09.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.760 --rc genhtml_branch_coverage=1 00:08:09.760 --rc genhtml_function_coverage=1 00:08:09.760 --rc genhtml_legend=1 00:08:09.760 --rc geninfo_all_blocks=1 00:08:09.760 --rc geninfo_unexecuted_blocks=1 00:08:09.760 00:08:09.760 ' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.760 --rc genhtml_branch_coverage=1 00:08:09.760 --rc genhtml_function_coverage=1 00:08:09.760 --rc genhtml_legend=1 00:08:09.760 --rc geninfo_all_blocks=1 00:08:09.760 --rc geninfo_unexecuted_blocks=1 00:08:09.760 00:08:09.760 ' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.760 --rc genhtml_branch_coverage=1 00:08:09.760 --rc genhtml_function_coverage=1 00:08:09.760 --rc genhtml_legend=1 00:08:09.760 --rc geninfo_all_blocks=1 00:08:09.760 --rc geninfo_unexecuted_blocks=1 00:08:09.760 00:08:09.760 ' 00:08:09.760 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.760 --rc genhtml_branch_coverage=1 00:08:09.760 --rc genhtml_function_coverage=1 00:08:09.760 --rc genhtml_legend=1 00:08:09.760 --rc geninfo_all_blocks=1 00:08:09.760 --rc geninfo_unexecuted_blocks=1 00:08:09.760 00:08:09.760 ' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:09.761 13:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:09.761 13:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.761 13:51:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:08:15.033 13:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.033 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.034 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.034 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:15.034 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.034 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.034 13:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:08:15.034 00:08:15.034 --- 10.0.0.2 ping statistics --- 00:08:15.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.034 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:08:15.034 00:08:15.034 --- 10.0.0.1 ping statistics --- 00:08:15.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.034 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.034 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=716971 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 716971 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 716971 ']' 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.035 13:51:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:15.973 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:15.973 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:08:15.973 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:15.973 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:15.974 13:51:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:28.182 Initializing NVMe Controllers 00:08:28.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.182 Initialization complete. Launching workers. 00:08:28.182 ======================================================== 00:08:28.182 Latency(us) 00:08:28.182 Device Information : IOPS MiB/s Average min max 00:08:28.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19676.90 76.86 3253.87 625.03 15506.80 00:08:28.182 ======================================================== 00:08:28.182 Total : 19676.90 76.86 3253.87 625.03 15506.80 00:08:28.182 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.182 rmmod nvme_tcp 00:08:28.182 rmmod nvme_fabrics 00:08:28.182 rmmod nvme_keyring 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 716971 ']' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 716971 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 716971 ']' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 716971 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 716971 00:08:28.182 13:52:05 
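Stripped of the rpc_cmd/xtrace indirection, the provisioning and measurement sequence above reduces to the sketch below. rpc_cmd forwards its arguments to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the direct equivalents would be roughly (rpc.py path abbreviated; all arguments copied from the trace):

    # One TCP subsystem backed by a 64 MiB malloc ramdisk with 512 B blocks.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # returns bdev name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive it for 10 s at queue depth 64 with 4 KiB mixed random I/O (-M 30 mix):
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The roughly 19.7k IOPS at a 3253.87 us mean latency in the table above is the single-malloc-bdev figure this run reports.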
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 716971' 00:08:28.182 killing process with pid 716971 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 716971 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 716971 00:08:28.182 nvmf threads initialize successfully 00:08:28.182 bdev subsystem init successfully 00:08:28.182 created a nvmf target service 00:08:28.182 create targets's poll groups done 00:08:28.182 all subsystems of target started 00:08:28.182 nvmf target is running 00:08:28.182 all subsystems of target stopped 00:08:28.182 destroy targets's poll groups done 00:08:28.182 destroyed the nvmf target service 00:08:28.182 bdev subsystem finish successfully 00:08:28.182 nvmf threads destroy successfully 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.182 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:28.441 00:08:28.441 real 0m19.031s 00:08:28.441 user 0m45.286s 00:08:28.441 sys 0m5.267s 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:28.441 ************************************ 00:08:28.441 END TEST nvmf_example 00:08:28.441 ************************************ 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
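The teardown interleaved above (nvmftestfini plus the EXIT trap) undoes each init step in reverse. An approximate expansion, with the pid and names from this run and error handling omitted:

    modprobe -v -r nvme-tcp            # also unloads nvme-fabrics/nvme-keyring, per the rmmod lines
    kill 716971 && wait 716971         # stop the example nvmf target; it logs its shutdown on exit
    # Remove only the rule tagged SPDK_NVMF, leaving the rest of the ruleset intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed form of _remove_spdk_ns; returns cvl_0_0 to the root ns
    ip -4 addr flush cvl_0_1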
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.441 13:52:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:28.702 ************************************ 00:08:28.702 START TEST nvmf_filesystem 00:08:28.702 ************************************ 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:28.702 * Looking for test storage... 00:08:28.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.702 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.703 --rc genhtml_branch_coverage=1 00:08:28.703 --rc genhtml_function_coverage=1 00:08:28.703 --rc genhtml_legend=1 00:08:28.703 --rc geninfo_all_blocks=1 00:08:28.703 --rc geninfo_unexecuted_blocks=1 00:08:28.703 00:08:28.703 ' 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.703 --rc genhtml_branch_coverage=1 00:08:28.703 --rc genhtml_function_coverage=1 00:08:28.703 --rc genhtml_legend=1 00:08:28.703 --rc geninfo_all_blocks=1 00:08:28.703 --rc geninfo_unexecuted_blocks=1 00:08:28.703 00:08:28.703 ' 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.703 --rc genhtml_branch_coverage=1 00:08:28.703 --rc genhtml_function_coverage=1 00:08:28.703 --rc genhtml_legend=1 00:08:28.703 --rc geninfo_all_blocks=1 00:08:28.703 --rc geninfo_unexecuted_blocks=1 00:08:28.703 00:08:28.703 ' 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.703 --rc genhtml_branch_coverage=1 00:08:28.703 --rc genhtml_function_coverage=1 00:08:28.703 --rc genhtml_legend=1 00:08:28.703 --rc geninfo_all_blocks=1 00:08:28.703 --rc geninfo_unexecuted_blocks=1 00:08:28.703 00:08:28.703 ' 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:28.703 13:52:07 
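The scripts/common.sh walk traced above is a plain component-wise version comparison: lt 1.15 2 splits both strings on '.', '-' and ':', treats missing components as zero, and lets the first unequal component decide, so lcov 1.15 sorts below 2 and the branch-coverage options get enabled. A condensed sketch of the same logic (the real helper additionally sanitizes each component through its decimal() check, elided here):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                      # split version strings into components
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do    # missing components default to 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]                  # equal: only <=, >= and == succeed
    }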
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:28.703 
13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:28.703 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:28.704 #define SPDK_CONFIG_H 00:08:28.704 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:28.704 #define SPDK_CONFIG_APPS 1 00:08:28.704 #define SPDK_CONFIG_ARCH native 00:08:28.704 #undef SPDK_CONFIG_ASAN 00:08:28.704 #undef SPDK_CONFIG_AVAHI 00:08:28.704 #undef SPDK_CONFIG_CET 00:08:28.704 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:28.704 #define SPDK_CONFIG_COVERAGE 1 00:08:28.704 #define SPDK_CONFIG_CROSS_PREFIX 00:08:28.704 #undef SPDK_CONFIG_CRYPTO 00:08:28.704 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:28.704 #undef SPDK_CONFIG_CUSTOMOCF 00:08:28.704 #undef SPDK_CONFIG_DAOS 00:08:28.704 #define SPDK_CONFIG_DAOS_DIR 00:08:28.704 #define SPDK_CONFIG_DEBUG 1 00:08:28.704 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:28.704 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:28.704 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:28.704 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:28.704 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:28.704 #undef SPDK_CONFIG_DPDK_UADK 00:08:28.704 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:28.704 #define SPDK_CONFIG_EXAMPLES 1 00:08:28.704 #undef SPDK_CONFIG_FC 00:08:28.704 #define SPDK_CONFIG_FC_PATH 00:08:28.704 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:28.704 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:28.704 #define SPDK_CONFIG_FSDEV 1 00:08:28.704 #undef SPDK_CONFIG_FUSE 00:08:28.704 #undef SPDK_CONFIG_FUZZER 00:08:28.704 #define SPDK_CONFIG_FUZZER_LIB 00:08:28.704 #undef SPDK_CONFIG_GOLANG 00:08:28.704 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:28.704 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:28.704 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:28.704 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:28.704 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:28.704 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:28.704 #undef SPDK_CONFIG_HAVE_LZ4 00:08:28.704 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:28.704 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:28.704 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:28.704 #define SPDK_CONFIG_IDXD 1 00:08:28.704 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:28.704 #undef SPDK_CONFIG_IPSEC_MB 00:08:28.704 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:28.704 #define SPDK_CONFIG_ISAL 1 00:08:28.704 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:28.704 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:28.704 #define SPDK_CONFIG_LIBDIR 00:08:28.704 #undef SPDK_CONFIG_LTO 00:08:28.704 #define SPDK_CONFIG_MAX_LCORES 128 00:08:28.704 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:28.704 #define SPDK_CONFIG_NVME_CUSE 1 00:08:28.704 #undef SPDK_CONFIG_OCF 00:08:28.704 #define SPDK_CONFIG_OCF_PATH 00:08:28.704 #define SPDK_CONFIG_OPENSSL_PATH 00:08:28.704 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:28.704 #define SPDK_CONFIG_PGO_DIR 00:08:28.704 #undef SPDK_CONFIG_PGO_USE 00:08:28.704 #define SPDK_CONFIG_PREFIX /usr/local 00:08:28.704 #undef SPDK_CONFIG_RAID5F 00:08:28.704 #undef SPDK_CONFIG_RBD 00:08:28.704 #define SPDK_CONFIG_RDMA 1 00:08:28.704 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:28.704 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:28.704 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:28.704 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:28.704 #define SPDK_CONFIG_SHARED 1 00:08:28.704 #undef SPDK_CONFIG_SMA 00:08:28.704 #define SPDK_CONFIG_TESTS 1 00:08:28.704 #undef SPDK_CONFIG_TSAN 
00:08:28.704 #define SPDK_CONFIG_UBLK 1 00:08:28.704 #define SPDK_CONFIG_UBSAN 1 00:08:28.704 #undef SPDK_CONFIG_UNIT_TESTS 00:08:28.704 #undef SPDK_CONFIG_URING 00:08:28.704 #define SPDK_CONFIG_URING_PATH 00:08:28.704 #undef SPDK_CONFIG_URING_ZNS 00:08:28.704 #undef SPDK_CONFIG_USDT 00:08:28.704 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:28.704 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:28.704 #define SPDK_CONFIG_VFIO_USER 1 00:08:28.704 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:28.704 #define SPDK_CONFIG_VHOST 1 00:08:28.704 #define SPDK_CONFIG_VIRTIO 1 00:08:28.704 #undef SPDK_CONFIG_VTUNE 00:08:28.704 #define SPDK_CONFIG_VTUNE_DIR 00:08:28.704 #define SPDK_CONFIG_WERROR 1 00:08:28.704 #define SPDK_CONFIG_WPDK_DIR 00:08:28.704 #undef SPDK_CONFIG_XNVME 00:08:28.704 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.704 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
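The run of backslash-escaped characters closing the config dump above is just bash xtrace quoting a glob pattern; the underlying check in common/applications.sh is a short test that this tree was configured with debug assertions (path per this workspace):

    # The [[ ... == *pattern* ]] form is a substring glob match against the whole file.
    config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    [[ -e $config ]]
    [[ $(<"$config") == *'#define SPDK_CONFIG_DEBUG'* ]] && echo "debug build"

xtrace conservatively escapes every character of the right-hand pattern, which is why it prints as *\#\d\e\f\i\n\e...* in the log.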
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:28.705 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:28.705 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:28.705 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:08:28.706 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
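A note on the sanitizer wiring traced above: the harness makes ASan/UBSan failures fatal and feeds LeakSanitizer a generated suppression file (the leak:libfuse3.so entry seen just above). A minimal bash sketch of the same setup, using the exact values from this run:

    # Fail hard on sanitizer findings; keep core dumps usable (values as traced).
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # Known-leaky dependency: suppress it rather than fail the whole run.
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file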
00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 720084 ]] 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 720084 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:28.707 
13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.fhWSie 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fhWSie/tests/target /tmp/spdk.fhWSie 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:08:28.707 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123606446080 00:08:28.707 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356517376 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5750071296 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668225536 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847713792 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23592960 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=349184 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=154624 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:28.708 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677826560 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=434176 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:08:28.708 * Looking for test storage... 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123606446080 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=7964663808 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.708 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.967 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.967 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.967 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.967 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.968 --rc genhtml_branch_coverage=1 00:08:28.968 --rc genhtml_function_coverage=1 00:08:28.968 --rc genhtml_legend=1 00:08:28.968 --rc geninfo_all_blocks=1 00:08:28.968 --rc geninfo_unexecuted_blocks=1 00:08:28.968 00:08:28.968 ' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.968 --rc genhtml_branch_coverage=1 00:08:28.968 --rc genhtml_function_coverage=1 00:08:28.968 --rc genhtml_legend=1 00:08:28.968 --rc geninfo_all_blocks=1 00:08:28.968 --rc geninfo_unexecuted_blocks=1 00:08:28.968 00:08:28.968 ' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.968 --rc genhtml_branch_coverage=1 00:08:28.968 --rc genhtml_function_coverage=1 00:08:28.968 --rc genhtml_legend=1 00:08:28.968 --rc geninfo_all_blocks=1 00:08:28.968 --rc geninfo_unexecuted_blocks=1 00:08:28.968 00:08:28.968 ' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.968 --rc genhtml_branch_coverage=1 00:08:28.968 --rc genhtml_function_coverage=1 00:08:28.968 --rc genhtml_legend=1 00:08:28.968 --rc geninfo_all_blocks=1 00:08:28.968 --rc geninfo_unexecuted_blocks=1 00:08:28.968 00:08:28.968 ' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.968 13:52:08 
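Worth flagging: the trace just above records a real shell error — nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash reports "[: : integer expression expected" because the tested variable expands to the empty string. The run continues regardless, but the numeric test itself never evaluates as intended. A hedged sketch of the usual guard; the actual variable name at line 33 is not visible in this log, so the one below is illustrative only:

    # Default the flag before a numeric test so an unset/empty value
    # compares as 0 instead of tripping "integer expression expected".
    if [ "${NVMF_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi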
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.968 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.969 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:34.244 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:34.244 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.244 13:52:13 
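The device scan in this stretch classifies NICs purely by PCI vendor:device ID: 0x8086:0x1592 and 0x8086:0x159b land in the e810 bucket, 0x8086:0x37d2 in x722, and the listed Mellanox IDs in mlx — which is how the two 0x159b ports get reported as "Found ... (0x8086 - 0x159b)". A compact sketch of that mapping (the function name is mine, and the Mellanox IDs are collapsed to a wildcard here, unlike the explicit per-ID list in nvmf/common.sh):

    # Map a PCI vendor:device pair to the NIC family used by the test harness.
    nic_family() {
        case "$1" in
            0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
            0x8086:0x37d2)               echo x722 ;;    # Intel X722
            0x15b3:*)                    echo mlx  ;;    # Mellanox (simplified)
            *)                           echo unknown ;;
        esac
    }
    nic_family 0x8086:0x159b   # -> e810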
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:34.244 Found net devices under 0000:31:00.0: cvl_0_0 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.244 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:34.245 Found net devices under 0000:31:00.1: cvl_0_1 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.245 13:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.245 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:08:34.505 00:08:34.505 --- 10.0.0.2 ping statistics --- 00:08:34.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.505 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:08:34.505 00:08:34.505 --- 10.0.0.1 ping statistics --- 00:08:34.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.505 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.505 ************************************ 00:08:34.505 START TEST nvmf_filesystem_no_in_capsule 00:08:34.505 ************************************ 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=723909 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 723909 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 723909 ']' 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.505 13:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.505 [2024-11-06 13:52:13.704180] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:08:34.505 [2024-11-06 13:52:13.704241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.765 [2024-11-06 13:52:13.795768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.765 [2024-11-06 13:52:13.847893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.765 [2024-11-06 13:52:13.847941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.765 [2024-11-06 13:52:13.847949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.765 [2024-11-06 13:52:13.847956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.765 [2024-11-06 13:52:13.847962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
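The launch sequence here: nvmfappstart starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace set up earlier, records its PID (723909 in this run), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that wait loop, using the paths and arguments traced above — the real waitforlisten in common/autotest_common.sh is more involved and also confirms readiness over RPC:

    # Start the target in the test namespace (arguments exactly as traced above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll: bail out if the process died, stop once the RPC socket appears.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done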
00:08:34.765 [2024-11-06 13:52:13.849763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.765 [2024-11-06 13:52:13.849923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.765 [2024-11-06 13:52:13.850125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.765 [2024-11-06 13:52:13.850126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.390 [2024-11-06 13:52:14.513909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.390 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.390 Malloc1 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.391 13:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.391 [2024-11-06 13:52:14.629781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:08:35.391 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:08:35.687 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:35.687 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:08:35.688 { 00:08:35.688 "name": "Malloc1", 00:08:35.688 "aliases": [ 00:08:35.688 "053b80ad-0c77-4c24-a15f-9c1651a0a7ab" 00:08:35.688 ], 00:08:35.688 "product_name": "Malloc disk", 00:08:35.688 "block_size": 512, 00:08:35.688 "num_blocks": 1048576, 00:08:35.688 "uuid": "053b80ad-0c77-4c24-a15f-9c1651a0a7ab", 00:08:35.688 "assigned_rate_limits": { 00:08:35.688 "rw_ios_per_sec": 0, 00:08:35.688 "rw_mbytes_per_sec": 0, 00:08:35.688 "r_mbytes_per_sec": 0, 00:08:35.688 "w_mbytes_per_sec": 0 00:08:35.688 }, 00:08:35.688 "claimed": true, 00:08:35.688 "claim_type": "exclusive_write", 00:08:35.688 "zoned": false, 00:08:35.688 "supported_io_types": { 00:08:35.688 "read": 
true, 00:08:35.688 "write": true, 00:08:35.688 "unmap": true, 00:08:35.688 "flush": true, 00:08:35.688 "reset": true, 00:08:35.688 "nvme_admin": false, 00:08:35.688 "nvme_io": false, 00:08:35.688 "nvme_io_md": false, 00:08:35.688 "write_zeroes": true, 00:08:35.688 "zcopy": true, 00:08:35.688 "get_zone_info": false, 00:08:35.688 "zone_management": false, 00:08:35.688 "zone_append": false, 00:08:35.688 "compare": false, 00:08:35.688 "compare_and_write": false, 00:08:35.688 "abort": true, 00:08:35.688 "seek_hole": false, 00:08:35.688 "seek_data": false, 00:08:35.688 "copy": true, 00:08:35.688 "nvme_iov_md": false 00:08:35.688 }, 00:08:35.688 "memory_domains": [ 00:08:35.688 { 00:08:35.688 "dma_device_id": "system", 00:08:35.688 "dma_device_type": 1 00:08:35.688 }, 00:08:35.688 { 00:08:35.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.688 "dma_device_type": 2 00:08:35.688 } 00:08:35.688 ], 00:08:35.688 "driver_specific": {} 00:08:35.688 } 00:08:35.688 ]' 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:35.688 13:52:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.072 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.072 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:08:37.072 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.072 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:37.072 13:52:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:38.980 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:39.238 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:39.497 13:52:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.433 ************************************ 00:08:40.433 START TEST filesystem_ext4 00:08:40.433 ************************************ 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
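The xtrace that follows is the body of the nvmf_filesystem_create helper named in the banner above. Reconstructed from the traced lines (target/filesystem.sh@18-@43 plus make_filesystem in common/autotest_common.sh@928-@947), it boils down to roughly the sketch below; the retry counter i and error-handling paths are elided, so treat this as an outline rather than the exact script:

    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2
        # mkfs.ext4 takes -F to force-format; btrfs/xfs take -f (autotest_common.sh@933-@936)
        local force; [ "$fstype" = ext4 ] && force=-F || force=-f
        mkfs."$fstype" $force /dev/${nvme_name}p1
        mount /dev/${nvme_name}p1 /mnt/device
        touch /mnt/device/aaa && sync              # prove a write reaches the target
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                         # target process must still be alive
        lsblk -l -o NAME | grep -q -w "$nvme_name" # and the namespace still visible
    }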
00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:08:40.433 13:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:40.692 mke2fs 1.47.0 (5-Feb-2023) 00:08:40.692 Discarding device blocks: 0/522240 done 00:08:40.692 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:40.692 Filesystem UUID: c58b4542-4bed-4658-93c9-a4f6de284ca0 00:08:40.692 Superblock backups stored on blocks: 00:08:40.692 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:40.692 00:08:40.692 Allocating group tables: 0/64 done 00:08:40.692 Writing inode tables: 0/64 done 00:08:42.070 Creating journal (8192 blocks): done 00:08:43.997 Writing superblocks and filesystem accounting information: 0/64 done 00:08:43.997 00:08:43.997 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:08:43.997 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:50.566
13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 723909 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:50.566 00:08:50.566 real 0m9.797s 00:08:50.566 user 0m0.016s 00:08:50.566 sys 0m0.044s 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:50.566 ************************************ 00:08:50.566 END TEST filesystem_ext4 00:08:50.566 ************************************ 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.566 ************************************ 00:08:50.566 START TEST filesystem_btrfs 00:08:50.566 ************************************ 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:08:50.566 13:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:50.566 btrfs-progs v6.8.1 00:08:50.566 See https://btrfs.readthedocs.io for more information. 00:08:50.566 00:08:50.566 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:50.566 NOTE: several default settings have changed in version 5.15, please make sure 00:08:50.566 this does not affect your deployments: 00:08:50.566 - DUP for metadata (-m dup) 00:08:50.566 - enabled no-holes (-O no-holes) 00:08:50.566 - enabled free-space-tree (-R free-space-tree) 00:08:50.566 00:08:50.566 Label: (null) 00:08:50.566 UUID: 356a34cb-7069-4e07-afa7-547414b35d1d 00:08:50.566 Node size: 16384 00:08:50.566 Sector size: 4096 (CPU page size: 4096) 00:08:50.566 Filesystem size: 510.00MiB 00:08:50.566 Block group profiles: 00:08:50.566 Data: single 8.00MiB 00:08:50.566 Metadata: DUP 32.00MiB 00:08:50.566 System: DUP 8.00MiB 00:08:50.566 SSD detected: yes 00:08:50.566 Zoned device: no 00:08:50.566 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:50.566 Checksum: crc32c 00:08:50.566 Number of devices: 1 00:08:50.566 Devices: 00:08:50.566 ID SIZE PATH 00:08:50.566 1 510.00MiB /dev/nvme0n1p1 00:08:50.566 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:08:50.566 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 723909 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:51.134 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:51.135 
13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:51.135 00:08:51.135 real 0m0.838s 00:08:51.135 user 0m0.018s 00:08:51.135 sys 0m0.073s 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:51.135 ************************************ 00:08:51.135 END TEST filesystem_btrfs 00:08:51.135 ************************************ 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.135 ************************************ 00:08:51.135 START TEST filesystem_xfs 00:08:51.135 ************************************ 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:08:51.135 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:51.394 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:51.394 = sectsz=512 attr=2, projid32bit=1 00:08:51.394 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:51.394 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:51.394 data 
= bsize=4096 blocks=130560, imaxpct=25 00:08:51.394 = sunit=0 swidth=0 blks 00:08:51.394 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:51.394 log =internal log bsize=4096 blocks=16384, version=2 00:08:51.394 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:51.394 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:52.331 Discarding blocks...Done. 00:08:52.331 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:08:52.331 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 723909 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.615 00:08:55.615 real 0m4.518s 00:08:55.615 user 0m0.014s 00:08:55.615 sys 0m0.070s 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.615 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:55.615 ************************************ 00:08:55.615 END TEST filesystem_xfs 00:08:55.615 ************************************ 00:08:55.875 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:55.875 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:55.875 13:52:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.875 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 723909 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 723909 ']' 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 723909 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 723909 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 723909' 00:08:55.875 killing process with pid 723909 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 723909 00:08:55.875 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 723909 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:56.134 00:08:56.134 real 0m21.700s 00:08:56.134 user 1m25.763s 00:08:56.134 sys 0m1.099s 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.134 ************************************ 00:08:56.134 END TEST nvmf_filesystem_no_in_capsule 00:08:56.134 ************************************ 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.134 ************************************ 00:08:56.134 START TEST nvmf_filesystem_in_capsule 00:08:56.134 ************************************ 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=728988 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 728988 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 728988 ']' 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
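From this point the whole suite repeats with a single knob changed: run_test passes 4096 instead of 0 to nvmf_filesystem_part, and that value lands in the -c (in-capsule data size) argument of nvmf_create_transport. With -c 4096 the host may embed up to 4 KiB of write data directly in the NVMe/TCP command capsule instead of the target fetching it in a separate data transfer. Compare the two transport setups as traced in this log, above and below:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # no_in_capsule run above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # in_capsule run below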
00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.134 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.392 [2024-11-06 13:52:35.437532] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:08:56.392 [2024-11-06 13:52:35.437579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.392 [2024-11-06 13:52:35.507601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.392 [2024-11-06 13:52:35.537203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.392 [2024-11-06 13:52:35.537229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.392 [2024-11-06 13:52:35.537235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.392 [2024-11-06 13:52:35.537240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.392 [2024-11-06 13:52:35.537252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.392 [2024-11-06 13:52:35.538521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.392 [2024-11-06 13:52:35.538673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.392 [2024-11-06 13:52:35.538820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.392 [2024-11-06 13:52:35.538822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.392 [2024-11-06 13:52:35.643082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.392 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.392 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.654 Malloc1 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.654 [2024-11-06 13:52:35.762126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:08:56.654 13:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:08:56.654 { 00:08:56.654 "name": "Malloc1", 00:08:56.654 "aliases": [ 00:08:56.654 "65fc88c0-b66c-4c48-b1af-25663529745e" 00:08:56.654 ], 00:08:56.654 "product_name": "Malloc disk", 00:08:56.654 "block_size": 512, 00:08:56.654 "num_blocks": 1048576, 00:08:56.654 "uuid": "65fc88c0-b66c-4c48-b1af-25663529745e", 00:08:56.654 "assigned_rate_limits": { 00:08:56.654 "rw_ios_per_sec": 0, 00:08:56.654 "rw_mbytes_per_sec": 0, 00:08:56.654 "r_mbytes_per_sec": 0, 00:08:56.654 "w_mbytes_per_sec": 0 00:08:56.654 }, 00:08:56.654 "claimed": true, 00:08:56.654 "claim_type": "exclusive_write", 00:08:56.654 "zoned": false, 00:08:56.654 "supported_io_types": { 00:08:56.654 "read": true, 00:08:56.654 "write": true, 00:08:56.654 "unmap": true, 00:08:56.654 "flush": true, 00:08:56.654 "reset": true, 00:08:56.654 "nvme_admin": false, 00:08:56.654 "nvme_io": false, 00:08:56.654 "nvme_io_md": false, 00:08:56.654 "write_zeroes": true, 00:08:56.654 "zcopy": true, 00:08:56.654 "get_zone_info": false, 00:08:56.654 "zone_management": false, 00:08:56.654 "zone_append": false, 00:08:56.654 "compare": false, 00:08:56.654 "compare_and_write": false, 00:08:56.654 "abort": true, 00:08:56.654 "seek_hole": false, 00:08:56.654 "seek_data": false, 00:08:56.654 "copy": true, 00:08:56.654 "nvme_iov_md": false 00:08:56.654 }, 00:08:56.654 "memory_domains": [ 00:08:56.654 { 00:08:56.654 "dma_device_id": "system", 00:08:56.654 "dma_device_type": 1 00:08:56.654 }, 00:08:56.654 { 00:08:56.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.654 "dma_device_type": 2 00:08:56.654 } 00:08:56.654 ], 00:08:56.654 "driver_specific": {} 00:08:56.654 } 00:08:56.654 ]' 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:56.654 13:52:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.032 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.032 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:08:58.032 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.032 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:58.032 13:52:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:00.567 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:00.568 13:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:00.568 13:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:01.505 ************************************ 00:09:01.505 START TEST filesystem_in_capsule_ext4 00:09:01.505 ************************************ 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:01.505 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:09:01.506 13:52:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:01.506 mke2fs 1.47.0 (5-Feb-2023) 00:09:01.506 Discarding device blocks: 0/522240 done 00:09:01.506 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:01.506 Filesystem UUID: 43cd8250-adc1-455e-982a-16672471b0c7 00:09:01.506 Superblock backups stored on blocks: 00:09:01.506 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:01.506 00:09:01.506 Allocating group tables: 0/64 done 00:09:01.506 Writing inode tables: 
0/64 done 00:09:01.765 Creating journal (8192 blocks): done 00:09:02.283 Writing superblocks and filesystem accounting information: 0/64 done 00:09:02.283 00:09:02.283 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:09:02.283 13:52:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 728988 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:08.848 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:08.848 00:09:08.849 real 0m6.961s 00:09:08.849 user 0m0.014s 00:09:08.849 sys 0m0.038s 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:08.849 ************************************ 00:09:08.849 END TEST filesystem_in_capsule_ext4 00:09:08.849 ************************************ 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.849 
************************************ 00:09:08.849 START TEST filesystem_in_capsule_btrfs 00:09:08.849 ************************************ 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:08.849 btrfs-progs v6.8.1 00:09:08.849 See https://btrfs.readthedocs.io for more information. 00:09:08.849 00:09:08.849 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:08.849 NOTE: several default settings have changed in version 5.15, please make sure 00:09:08.849 this does not affect your deployments: 00:09:08.849 - DUP for metadata (-m dup) 00:09:08.849 - enabled no-holes (-O no-holes) 00:09:08.849 - enabled free-space-tree (-R free-space-tree) 00:09:08.849 00:09:08.849 Label: (null) 00:09:08.849 UUID: 42cea4b5-155e-4944-9819-c9a6daa066ca 00:09:08.849 Node size: 16384 00:09:08.849 Sector size: 4096 (CPU page size: 4096) 00:09:08.849 Filesystem size: 510.00MiB 00:09:08.849 Block group profiles: 00:09:08.849 Data: single 8.00MiB 00:09:08.849 Metadata: DUP 32.00MiB 00:09:08.849 System: DUP 8.00MiB 00:09:08.849 SSD detected: yes 00:09:08.849 Zoned device: no 00:09:08.849 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:08.849 Checksum: crc32c 00:09:08.849 Number of devices: 1 00:09:08.849 Devices: 00:09:08.849 ID SIZE PATH 00:09:08.849 1 510.00MiB /dev/nvme0n1p1 00:09:08.849 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:09:08.849 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:09.785 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 728988 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:09.785 00:09:09.785 real 0m1.284s 00:09:09.785 user 0m0.009s 00:09:09.785 sys 0m0.048s 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:09:09.785 ************************************ 00:09:09.785 END TEST filesystem_in_capsule_btrfs 00:09:09.785 ************************************ 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.785 ************************************ 00:09:09.785 START TEST filesystem_in_capsule_xfs 00:09:09.785 ************************************ 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:09:09.785 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:10.044 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:10.044 = sectsz=512 attr=2, projid32bit=1 00:09:10.044 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:10.044 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:10.044 data = bsize=4096 blocks=130560, imaxpct=25 00:09:10.044 = sunit=0 swidth=0 blks 00:09:10.044 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:10.044 log =internal log bsize=4096 blocks=16384, version=2 00:09:10.044 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:10.044 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:10.609 Discarding blocks...Done. 
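The make_filesystem helper traced at common/autotest_common.sh@928-@947 picks a force flag per filesystem before invoking mkfs. A sketch consistent with those trace lines (only the non-ext4 path with force=-f and the return 0 are visible here; the ext4 branch, and any retry around mkfs hinted at by "local i=0", are assumptions):

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F                          # mkfs.ext4 spells "force" as -F
      else
          force=-f                          # mkfs.btrfs / mkfs.xfs use -f
      fi
      mkfs.$fstype $force "$dev_name" && return 0
  }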
00:09:10.610 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:09:10.610 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.142 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 728988 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.142 00:09:13.142 real 0m3.066s 00:09:13.142 user 0m0.013s 00:09:13.142 sys 0m0.037s 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:13.142 ************************************ 00:09:13.142 END TEST filesystem_in_capsule_xfs 00:09:13.142 ************************************ 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:13.142 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 728988 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 728988 ']' 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 728988 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 728988 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 728988' 00:09:13.143 killing process with pid 728988 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 728988 00:09:13.143 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 728988 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:13.401 00:09:13.401 real 0m17.186s 00:09:13.401 user 1m7.791s 00:09:13.401 sys 0m0.961s 00:09:13.401 13:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.401 ************************************ 00:09:13.401 END TEST nvmf_filesystem_in_capsule 00:09:13.401 ************************************ 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.401 rmmod nvme_tcp 00:09:13.401 rmmod nvme_fabrics 00:09:13.401 rmmod nvme_keyring 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:13.401 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.402 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.935 00:09:15.935 real 0m46.979s 00:09:15.935 user 2m35.270s 00:09:15.935 sys 0m6.320s 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:15.935 
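The nvmftestfini teardown traced above unloads the initiator-side kernel modules, strips only the iptables rules the harness tagged, and removes the target's network namespace. In outline (module names and the iptables-save pipeline are verbatim from the trace; the body of _remove_spdk_ns is an assumption):

  modprobe -v -r nvme-tcp               # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk       # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # as traced at nvmf/common.sh@303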
************************************ 00:09:15.935 END TEST nvmf_filesystem 00:09:15.935 ************************************ 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:15.935 ************************************ 00:09:15.935 START TEST nvmf_target_discovery 00:09:15.935 ************************************ 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:15.935 * Looking for test storage... 00:09:15.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:15.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.935 --rc genhtml_branch_coverage=1 00:09:15.935 --rc genhtml_function_coverage=1 00:09:15.935 --rc genhtml_legend=1 00:09:15.935 --rc geninfo_all_blocks=1 00:09:15.935 --rc geninfo_unexecuted_blocks=1 00:09:15.935 00:09:15.935 ' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:15.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.935 --rc genhtml_branch_coverage=1 00:09:15.935 --rc genhtml_function_coverage=1 00:09:15.935 --rc genhtml_legend=1 00:09:15.935 --rc geninfo_all_blocks=1 00:09:15.935 --rc geninfo_unexecuted_blocks=1 00:09:15.935 00:09:15.935 ' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:15.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.935 --rc genhtml_branch_coverage=1 00:09:15.935 --rc genhtml_function_coverage=1 00:09:15.935 --rc genhtml_legend=1 00:09:15.935 --rc geninfo_all_blocks=1 00:09:15.935 --rc geninfo_unexecuted_blocks=1 00:09:15.935 00:09:15.935 ' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:15.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.935 --rc genhtml_branch_coverage=1 00:09:15.935 --rc genhtml_function_coverage=1 00:09:15.935 --rc genhtml_legend=1 00:09:15.935 --rc geninfo_all_blocks=1 00:09:15.935 --rc geninfo_unexecuted_blocks=1 00:09:15.935 00:09:15.935 ' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.935 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.936 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.218 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.219 13:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:21.219 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:21.219 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:21.219 Found net devices under 0000:31:00.0: cvl_0_0 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:21.219 Found net devices under 0000:31:00.1: cvl_0_1 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.219 13:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:09:21.219 00:09:21.219 --- 10.0.0.2 ping statistics --- 00:09:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.219 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:09:21.219 00:09:21.219 --- 10.0.0.1 ping statistics --- 00:09:21.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.219 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.219 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=737229 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 737229 00:09:21.220 13:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 737229 ']' 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:21.220 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.220 [2024-11-06 13:53:00.449679] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:09:21.220 [2024-11-06 13:53:00.449727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.479 [2024-11-06 13:53:00.532794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.479 [2024-11-06 13:53:00.569816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.479 [2024-11-06 13:53:00.569843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.479 [2024-11-06 13:53:00.569852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.479 [2024-11-06 13:53:00.569859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.479 [2024-11-06 13:53:00.569865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
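The target application for the discovery test is started inside the cvl_0_0_ns_spdk namespace with the exact flags shown above (-i 0 -e 0xFFFF -m 0xF, hence the four reactor cores below). waitforlisten then blocks until the RPC socket answers. A sketch of that gate (binary path, namespace and socket path are the ones named in the trace; the polling body is an assumption):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the app is up before issuing any rpc.py calls
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done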
00:09:21.479 [2024-11-06 13:53:00.571569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.479 [2024-11-06 13:53:00.571722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.479 [2024-11-06 13:53:00.571875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.479 [2024-11-06 13:53:00.571876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 [2024-11-06 13:53:01.258473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 Null1 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 [2024-11-06 13:53:01.315575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 Null2 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.047 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:09:22.306 Null3 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 Null4 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.306 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:09:22.566 00:09:22.566 Discovery Log Number of Records 6, Generation counter 6 00:09:22.566 =====Discovery Log Entry 0====== 00:09:22.566 trtype: tcp 00:09:22.566 adrfam: ipv4 00:09:22.566 subtype: current discovery subsystem 00:09:22.566 treq: not required 00:09:22.566 portid: 0 00:09:22.566 trsvcid: 4420 00:09:22.566 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:22.566 traddr: 10.0.0.2 00:09:22.566 eflags: explicit discovery connections, duplicate discovery information 00:09:22.566 sectype: none 00:09:22.566 =====Discovery Log Entry 1====== 00:09:22.566 trtype: tcp 00:09:22.566 adrfam: ipv4 00:09:22.566 subtype: nvme subsystem 00:09:22.566 treq: not required 00:09:22.566 portid: 0 00:09:22.566 trsvcid: 4420 00:09:22.566 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:22.566 traddr: 10.0.0.2 00:09:22.566 eflags: none 00:09:22.566 sectype: none 00:09:22.566 =====Discovery Log Entry 2====== 00:09:22.566 trtype: tcp 00:09:22.566 adrfam: ipv4 00:09:22.566 subtype: nvme subsystem 00:09:22.566 treq: not required 00:09:22.566 portid: 0 00:09:22.566 trsvcid: 4420 00:09:22.566 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:22.566 traddr: 10.0.0.2 00:09:22.566 eflags: none 00:09:22.566 sectype: none 00:09:22.566 =====Discovery Log Entry 3====== 00:09:22.566 trtype: tcp 00:09:22.566 adrfam: ipv4 00:09:22.566 subtype: nvme subsystem 00:09:22.566 treq: not required 00:09:22.566 portid: 0 00:09:22.566 trsvcid: 4420 00:09:22.566 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:22.566 traddr: 10.0.0.2 00:09:22.566 eflags: none 00:09:22.566 sectype: none 00:09:22.566 =====Discovery Log Entry 4====== 00:09:22.566 trtype: tcp 00:09:22.566 adrfam: ipv4 00:09:22.566 subtype: nvme subsystem 
00:09:22.566 treq: not required 00:09:22.566 portid: 0 00:09:22.566 trsvcid: 4420 00:09:22.566 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:22.566 traddr: 10.0.0.2 00:09:22.566 eflags: none 00:09:22.566 sectype: none 00:09:22.566 =====Discovery Log Entry 5====== 00:09:22.566 trtype: tcp 00:09:22.566 adrfam: ipv4 00:09:22.566 subtype: discovery subsystem referral 00:09:22.566 treq: not required 00:09:22.566 portid: 0 00:09:22.566 trsvcid: 4430 00:09:22.566 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:22.566 traddr: 10.0.0.2 00:09:22.566 eflags: none 00:09:22.566 sectype: none 00:09:22.566 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:22.566 Perform nvmf subsystem discovery via RPC 00:09:22.566 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:22.566 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.566 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.566 [ 00:09:22.566 { 00:09:22.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:22.566 "subtype": "Discovery", 00:09:22.566 "listen_addresses": [ 00:09:22.566 { 00:09:22.566 "trtype": "TCP", 00:09:22.566 "adrfam": "IPv4", 00:09:22.567 "traddr": "10.0.0.2", 00:09:22.567 "trsvcid": "4420" 00:09:22.567 } 00:09:22.567 ], 00:09:22.567 "allow_any_host": true, 00:09:22.567 "hosts": [] 00:09:22.567 }, 00:09:22.567 { 00:09:22.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.567 "subtype": "NVMe", 00:09:22.567 "listen_addresses": [ 00:09:22.567 { 00:09:22.567 "trtype": "TCP", 00:09:22.567 "adrfam": "IPv4", 00:09:22.567 "traddr": "10.0.0.2", 00:09:22.567 "trsvcid": "4420" 00:09:22.567 } 00:09:22.567 ], 00:09:22.567 "allow_any_host": true, 00:09:22.567 "hosts": [], 00:09:22.567 "serial_number": "SPDK00000000000001", 00:09:22.567 "model_number": "SPDK bdev Controller", 00:09:22.567 "max_namespaces": 32, 00:09:22.567 "min_cntlid": 1, 00:09:22.567 "max_cntlid": 65519, 00:09:22.567 "namespaces": [ 00:09:22.567 { 00:09:22.567 "nsid": 1, 00:09:22.567 "bdev_name": "Null1", 00:09:22.567 "name": "Null1", 00:09:22.567 "nguid": "87CCB806CC234FA8B4589B450EDD52D9", 00:09:22.567 "uuid": "87ccb806-cc23-4fa8-b458-9b450edd52d9" 00:09:22.567 } 00:09:22.567 ] 00:09:22.567 }, 00:09:22.567 { 00:09:22.567 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:22.567 "subtype": "NVMe", 00:09:22.567 "listen_addresses": [ 00:09:22.567 { 00:09:22.567 "trtype": "TCP", 00:09:22.567 "adrfam": "IPv4", 00:09:22.567 "traddr": "10.0.0.2", 00:09:22.567 "trsvcid": "4420" 00:09:22.567 } 00:09:22.567 ], 00:09:22.567 "allow_any_host": true, 00:09:22.567 "hosts": [], 00:09:22.567 "serial_number": "SPDK00000000000002", 00:09:22.567 "model_number": "SPDK bdev Controller", 00:09:22.567 "max_namespaces": 32, 00:09:22.567 "min_cntlid": 1, 00:09:22.567 "max_cntlid": 65519, 00:09:22.567 "namespaces": [ 00:09:22.567 { 00:09:22.567 "nsid": 1, 00:09:22.567 "bdev_name": "Null2", 00:09:22.567 "name": "Null2", 00:09:22.567 "nguid": "C931604A9965438A80B2D2DE7D3BC4B2", 00:09:22.567 "uuid": "c931604a-9965-438a-80b2-d2de7d3bc4b2" 00:09:22.567 } 00:09:22.567 ] 00:09:22.567 }, 00:09:22.567 { 00:09:22.567 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:22.567 "subtype": "NVMe", 00:09:22.567 "listen_addresses": [ 00:09:22.567 { 00:09:22.567 "trtype": "TCP", 00:09:22.567 "adrfam": "IPv4", 00:09:22.567 "traddr": "10.0.0.2", 
00:09:22.567 "trsvcid": "4420" 00:09:22.567 } 00:09:22.567 ], 00:09:22.567 "allow_any_host": true, 00:09:22.567 "hosts": [], 00:09:22.567 "serial_number": "SPDK00000000000003", 00:09:22.567 "model_number": "SPDK bdev Controller", 00:09:22.567 "max_namespaces": 32, 00:09:22.567 "min_cntlid": 1, 00:09:22.567 "max_cntlid": 65519, 00:09:22.567 "namespaces": [ 00:09:22.567 { 00:09:22.567 "nsid": 1, 00:09:22.567 "bdev_name": "Null3", 00:09:22.567 "name": "Null3", 00:09:22.567 "nguid": "1CAE9025C12048178789628F99D3DAF6", 00:09:22.567 "uuid": "1cae9025-c120-4817-8789-628f99d3daf6" 00:09:22.567 } 00:09:22.567 ] 00:09:22.567 }, 00:09:22.567 { 00:09:22.567 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:22.567 "subtype": "NVMe", 00:09:22.567 "listen_addresses": [ 00:09:22.567 { 00:09:22.567 "trtype": "TCP", 00:09:22.567 "adrfam": "IPv4", 00:09:22.567 "traddr": "10.0.0.2", 00:09:22.567 "trsvcid": "4420" 00:09:22.567 } 00:09:22.567 ], 00:09:22.567 "allow_any_host": true, 00:09:22.567 "hosts": [], 00:09:22.567 "serial_number": "SPDK00000000000004", 00:09:22.567 "model_number": "SPDK bdev Controller", 00:09:22.567 "max_namespaces": 32, 00:09:22.567 "min_cntlid": 1, 00:09:22.567 "max_cntlid": 65519, 00:09:22.567 "namespaces": [ 00:09:22.567 { 00:09:22.567 "nsid": 1, 00:09:22.567 "bdev_name": "Null4", 00:09:22.567 "name": "Null4", 00:09:22.567 "nguid": "B6E65614BA10484D91D3EDF31A2F7249", 00:09:22.567 "uuid": "b6e65614-ba10-484d-91d3-edf31a2f7249" 00:09:22.567 } 00:09:22.567 ] 00:09:22.567 } 00:09:22.567 ] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:22.567 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.567 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.567 rmmod nvme_tcp 00:09:22.567 rmmod nvme_fabrics 00:09:22.567 rmmod nvme_keyring 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 737229 ']' 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 737229 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 737229 ']' 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 737229 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 737229 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 737229' 00:09:22.568 killing process with pid 737229 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 737229 00:09:22.568 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 737229 00:09:22.827 13:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.827 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.728 13:53:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.728 00:09:24.728 real 0m9.251s 00:09:24.728 user 0m7.163s 00:09:24.728 sys 0m4.435s 00:09:24.728 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.728 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.728 ************************************ 00:09:24.728 END TEST nvmf_target_discovery 00:09:24.728 ************************************ 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:24.988 ************************************ 00:09:24.988 START TEST nvmf_referrals 00:09:24.988 ************************************ 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:24.988 * Looking for test storage... 
00:09:24.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.988 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:24.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.989 --rc genhtml_branch_coverage=1 00:09:24.989 --rc genhtml_function_coverage=1 00:09:24.989 --rc genhtml_legend=1 00:09:24.989 --rc geninfo_all_blocks=1 00:09:24.989 --rc geninfo_unexecuted_blocks=1 00:09:24.989 00:09:24.989 ' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:24.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.989 --rc genhtml_branch_coverage=1 00:09:24.989 --rc genhtml_function_coverage=1 00:09:24.989 --rc genhtml_legend=1 00:09:24.989 --rc geninfo_all_blocks=1 00:09:24.989 --rc geninfo_unexecuted_blocks=1 00:09:24.989 00:09:24.989 ' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:24.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.989 --rc genhtml_branch_coverage=1 00:09:24.989 --rc genhtml_function_coverage=1 00:09:24.989 --rc genhtml_legend=1 00:09:24.989 --rc geninfo_all_blocks=1 00:09:24.989 --rc geninfo_unexecuted_blocks=1 00:09:24.989 00:09:24.989 ' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:24.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.989 --rc genhtml_branch_coverage=1 00:09:24.989 --rc genhtml_function_coverage=1 00:09:24.989 --rc genhtml_legend=1 00:09:24.989 --rc geninfo_all_blocks=1 00:09:24.989 --rc geninfo_unexecuted_blocks=1 00:09:24.989 00:09:24.989 ' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.989 13:53:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:09:31.555 13:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.555 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:31.556 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:31.556 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:31.556 
13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:31.556 Found net devices under 0000:31:00.0: cvl_0_0 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:31.556 Found net devices under 0000:31:00.1: cvl_0_1 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.556 13:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:09:31.556 00:09:31.556 --- 10.0.0.2 ping statistics --- 00:09:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.556 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:09:31.556 13:53:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:09:31.556 00:09:31.556 --- 10.0.0.1 ping statistics --- 00:09:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.556 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=741935 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 741935 00:09:31.556 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 741935 ']' 00:09:31.557 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.557 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.557 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.557 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.557 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.557 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.557 [2024-11-06 13:53:10.073077] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:09:31.557 [2024-11-06 13:53:10.073140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.557 [2024-11-06 13:53:10.164560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.557 [2024-11-06 13:53:10.218008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.557 [2024-11-06 13:53:10.218066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.557 [2024-11-06 13:53:10.218074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.557 [2024-11-06 13:53:10.218082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.557 [2024-11-06 13:53:10.218088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.557 [2024-11-06 13:53:10.220471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.557 [2024-11-06 13:53:10.220630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.557 [2024-11-06 13:53:10.220772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.557 [2024-11-06 13:53:10.220773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 [2024-11-06 13:53:10.894748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:31.817 13:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 [2024-11-06 13:53:10.914576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.817 13:53:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:31.817 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.077 13:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.077 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
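The stretch of trace above is the core of the referrals check: a referral is registered over the RPC socket, read back with nvmf_discovery_get_referrals, and cross-checked against what a host actually receives in the discovery log page via nvme discover. Condensed into plain commands, one round trip looks like the sketch below (a sketch only: it assumes SPDK's stock scripts/rpc.py talking to the default RPC socket rather than the suite's rpc_cmd wrapper; the addresses, ports, and jq filters are the ones used in this run):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Target side: register a referral, then read the list back over RPC
    $spdk/scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    $spdk/scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # Host side: the same referral must appear in the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # Removing the referral must empty both views again
    $spdk/scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430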
00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.336 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.595 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.853 13:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.853 13:53:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:32.853 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:33.112 13:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.112 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.371 rmmod nvme_tcp 00:09:33.371 rmmod nvme_fabrics 00:09:33.371 rmmod nvme_keyring 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 741935 ']' 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 741935 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 741935 ']' 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 741935 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:33.371 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 741935 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 741935' 00:09:33.630 killing process with pid 741935 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 741935 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 741935 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.630 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns
00:09:33.631 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:33.631 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:36.213
00:09:36.213 real 0m10.799s
00:09:36.213 user 0m12.518s
00:09:36.213 sys 0m4.849s
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:09:36.213 ************************************
00:09:36.213 END TEST nvmf_referrals
00:09:36.213 ************************************
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:36.213 ************************************
00:09:36.213 START TEST nvmf_connect_disconnect
00:09:36.213 ************************************
00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:09:36.213 * Looking for test storage...
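The real/user/sys block and the END TEST/START TEST banners above come from the run_test helper in autotest_common.sh, which times each suite and fences its output before the next one starts. Outside the harness, the same suite can be replayed by hand, roughly (a sketch; the banners and timing above come from run_test, not from the script itself):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    time test/nvmf/target/connect_disconnect.sh --transport=tcp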
00:09:36.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.213 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.213 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.214 --rc genhtml_branch_coverage=1 00:09:36.214 --rc genhtml_function_coverage=1 00:09:36.214 --rc genhtml_legend=1 00:09:36.214 --rc geninfo_all_blocks=1 00:09:36.214 --rc geninfo_unexecuted_blocks=1 00:09:36.214 00:09:36.214 ' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.214 --rc genhtml_branch_coverage=1 00:09:36.214 --rc genhtml_function_coverage=1 00:09:36.214 --rc genhtml_legend=1 00:09:36.214 --rc geninfo_all_blocks=1 00:09:36.214 --rc geninfo_unexecuted_blocks=1 00:09:36.214 00:09:36.214 ' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.214 --rc genhtml_branch_coverage=1 00:09:36.214 --rc genhtml_function_coverage=1 00:09:36.214 --rc genhtml_legend=1 00:09:36.214 --rc geninfo_all_blocks=1 00:09:36.214 --rc geninfo_unexecuted_blocks=1 00:09:36.214 00:09:36.214 ' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.214 --rc genhtml_branch_coverage=1 00:09:36.214 --rc genhtml_function_coverage=1 00:09:36.214 --rc genhtml_legend=1 00:09:36.214 --rc geninfo_all_blocks=1 00:09:36.214 --rc geninfo_unexecuted_blocks=1 00:09:36.214 00:09:36.214 ' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.214 13:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:36.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:09:36.214 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:41.567
13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:41.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.567 
13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:41.567 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:41.567 Found net devices under 0000:31:00.0: cvl_0_0 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
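The loop traced here is how gather_supported_nvmf_pci_devs turns PCI IDs into usable NICs: it collects the E810 functions (vendor 0x8086, device 0x159b in this run), then resolves each function to its kernel net device with a sysfs glob. On this host the resolution step reduces to something like the following stand-alone sketch of the lookup from the trace:

    pci=0000:31:00.0                       # first E810 port found in this run
    ls /sys/bus/pci/devices/$pci/net/      # prints cvl_0_0 on this machine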
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:41.567 Found net devices under 0000:31:00.1: cvl_0_1 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:41.567 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:41.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:41.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms
00:09:41.568
00:09:41.568 --- 10.0.0.2 ping statistics ---
00:09:41.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:41.568 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:41.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:41.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms
00:09:41.568
00:09:41.568 --- 10.0.0.1 ping statistics ---
00:09:41.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:41.568 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=747040
00:09:41.568 13:53:20
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 747040 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 747040 ']' 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.568 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 [2024-11-06 13:53:20.482947] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:09:41.568 [2024-11-06 13:53:20.482995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.568 [2024-11-06 13:53:20.566834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.568 [2024-11-06 13:53:20.603977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.568 [2024-11-06 13:53:20.604010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.568 [2024-11-06 13:53:20.604018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.568 [2024-11-06 13:53:20.604024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.568 [2024-11-06 13:53:20.604030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
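As in the nvmf_referrals suite, nvmfappstart launches nvmf_tgt inside the network namespace prepared above and then blocks in waitforlisten until the RPC socket of pid 747040 answers. Stripped of the wrappers, the launch amounts to roughly this (a sketch; the flags are exactly as logged: shm id 0, all tracepoint groups enabled, core mask 0xF for cores 0-3):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # the harness then polls this pid's RPC socket (waitforlisten)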
00:09:41.568 [2024-11-06 13:53:20.605559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.568 [2024-11-06 13:53:20.605709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.568 [2024-11-06 13:53:20.605857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.568 [2024-11-06 13:53:20.605857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.137 [2024-11-06 13:53:21.292631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.137 13:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.137 [2024-11-06 13:53:21.360055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:42.137 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:46.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.408 rmmod nvme_tcp 00:10:00.408 rmmod nvme_fabrics 00:10:00.408 rmmod nvme_keyring 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 747040 ']' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 747040 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 747040 ']' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 747040 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
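The rpc_cmd traces above are thin wrappers that forward their arguments to scripts/rpc.py; assuming the default /var/tmp/spdk.sock socket, the test body reduces to roughly this sketch. The nvme connect line is inferred (the trace only shows the disconnect output), and the real connect_disconnect.sh additionally waits for the controller to appear and disappear between steps:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                     # returns the bdev name, Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  for i in $(seq 1 5); do                            # num_iterations=5 above
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints the "disconnected 1 controller(s)" lines
  done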
00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 747040 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 747040' 00:10:00.408 killing process with pid 747040 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 747040 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 747040 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.408 13:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.317 00:10:02.317 real 0m26.603s 00:10:02.317 user 1m16.443s 00:10:02.317 sys 0m5.048s 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:02.317 ************************************ 00:10:02.317 END TEST nvmf_connect_disconnect 00:10:02.317 ************************************ 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:10:02.317 ************************************ 00:10:02.317 START TEST nvmf_multitarget 00:10:02.317 ************************************ 00:10:02.317 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:02.577 * Looking for test storage... 00:10:02.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:02.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.578 --rc genhtml_branch_coverage=1 00:10:02.578 --rc genhtml_function_coverage=1 00:10:02.578 --rc genhtml_legend=1 00:10:02.578 --rc geninfo_all_blocks=1 00:10:02.578 --rc geninfo_unexecuted_blocks=1 00:10:02.578 00:10:02.578 ' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:02.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.578 --rc genhtml_branch_coverage=1 00:10:02.578 --rc genhtml_function_coverage=1 00:10:02.578 --rc genhtml_legend=1 00:10:02.578 --rc geninfo_all_blocks=1 00:10:02.578 --rc geninfo_unexecuted_blocks=1 00:10:02.578 00:10:02.578 ' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:02.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.578 --rc genhtml_branch_coverage=1 00:10:02.578 --rc genhtml_function_coverage=1 00:10:02.578 --rc genhtml_legend=1 00:10:02.578 --rc geninfo_all_blocks=1 00:10:02.578 --rc geninfo_unexecuted_blocks=1 00:10:02.578 00:10:02.578 ' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:02.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.578 --rc genhtml_branch_coverage=1 00:10:02.578 --rc genhtml_function_coverage=1 00:10:02.578 --rc genhtml_legend=1 00:10:02.578 --rc geninfo_all_blocks=1 00:10:02.578 --rc geninfo_unexecuted_blocks=1 00:10:02.578 00:10:02.578 ' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.578 13:53:41 
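The cmp_versions trace above is the harness checking whether the installed lcov is older than 2 so it can pick compatible coverage flags. Condensed to its essentials (a sketch of the lt()/cmp_versions helpers in scripts/common.sh, not the verbatim code):

  lt() {
      local -a ver1 ver2; local i
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
          ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0   # missing fields compare as 0
          ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
      done
      return 1                                            # equal versions are not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2"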
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.578 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:02.579 13:53:41 
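One recurring wart worth noting, not a failure: the "[: : integer expression expected" message above comes from build_nvmf_app_args running '[' '' -eq 1 ']' when an optional flag variable is empty. The usual defensive spelling is a parameter-expansion default; SOME_FLAG below is a placeholder name, since the trace does not show which variable is unset:

  if [ "${SOME_FLAG:-0}" -eq 1 ]; then               # unset/empty collapses to 0 instead of ''
      echo "flag enabled"
  fi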
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.579 13:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:07.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.862 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:07.863 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:07.863 Found net devices under 0000:31:00.0: cvl_0_0 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:07.863 Found net devices under 0000:31:00.1: cvl_0_1 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
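Device discovery in the trace above is plain sysfs globbing: for each PCI function whose vendor:device pair is in the supported e810/x722/mlx lists, the harness expands /sys/bus/pci/devices/$pci/net/* to find the kernel netdev bound to it. A minimal sketch with the two E810 ports this run found:

  for pci in 0000:31:00.0 0000:31:00.1; do           # 0x8086:0x159b, per the log
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] || continue               # glob stays literal if no driver is bound
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done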
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.863 13:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:10:07.863 00:10:07.863 --- 10.0.0.2 ping statistics --- 00:10:07.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.863 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:07.863 00:10:07.863 --- 10.0.0.1 ping statistics --- 00:10:07.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.863 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=755744 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 755744 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 755744 ']' 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:07.863 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.123 [2024-11-06 13:53:47.172235] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:10:08.123 [2024-11-06 13:53:47.172289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.123 [2024-11-06 13:53:47.255896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.123 [2024-11-06 13:53:47.291674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.123 [2024-11-06 13:53:47.291706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.123 [2024-11-06 13:53:47.291714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.123 [2024-11-06 13:53:47.291721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.123 [2024-11-06 13:53:47.291727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.123 [2024-11-06 13:53:47.293426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.123 [2024-11-06 13:53:47.293580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.123 [2024-11-06 13:53:47.293734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.123 [2024-11-06 13:53:47.293734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.691 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:08.950 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:08.950 13:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:08.950 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:08.950 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:08.950 "nvmf_tgt_1" 00:10:08.950 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:08.950 "nvmf_tgt_2" 00:10:08.950 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:10:08.950 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:09.210 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:09.210 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:09.210 true 00:10:09.210 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:09.210 true 00:10:09.210 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:09.210 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.469 rmmod nvme_tcp 00:10:09.469 rmmod nvme_fabrics 00:10:09.469 rmmod nvme_keyring 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 755744 ']' 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 755744 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 755744 ']' 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 755744 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 755744 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:09.469 13:53:48 
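Stripped of the xtrace plumbing, the multitarget exercise above is five RPC calls and three length checks against the JSON that nvmf_get_targets returns:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc_py nvmf_get_targets | jq length               # 1: only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32     # prints "nvmf_tgt_1"
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32     # prints "nvmf_tgt_2"
  $rpc_py nvmf_get_targets | jq length               # 3
  $rpc_py nvmf_delete_target -n nvmf_tgt_1           # prints true
  $rpc_py nvmf_delete_target -n nvmf_tgt_2           # prints true
  $rpc_py nvmf_get_targets | jq length               # back to 1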
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 755744' 00:10:09.469 killing process with pid 755744 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 755744 00:10:09.469 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 755744 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:09.728 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.729 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.729 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.729 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.729 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.729 13:53:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.636 00:10:11.636 real 0m9.289s 00:10:11.636 user 0m7.971s 00:10:11.636 sys 0m4.516s 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:11.636 ************************************ 00:10:11.636 END TEST nvmf_multitarget 00:10:11.636 ************************************ 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.636 ************************************ 00:10:11.636 START TEST nvmf_rpc 00:10:11.636 ************************************ 00:10:11.636 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:11.896 * Looking for test storage... 
00:10:11.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.896 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:11.896 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:11.896 13:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.896 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.897 --rc genhtml_branch_coverage=1 00:10:11.897 --rc genhtml_function_coverage=1 00:10:11.897 --rc genhtml_legend=1 00:10:11.897 --rc geninfo_all_blocks=1 00:10:11.897 --rc geninfo_unexecuted_blocks=1 00:10:11.897 00:10:11.897 ' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.897 --rc genhtml_branch_coverage=1 00:10:11.897 --rc genhtml_function_coverage=1 00:10:11.897 --rc genhtml_legend=1 00:10:11.897 --rc geninfo_all_blocks=1 00:10:11.897 --rc geninfo_unexecuted_blocks=1 00:10:11.897 00:10:11.897 ' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.897 --rc genhtml_branch_coverage=1 00:10:11.897 --rc genhtml_function_coverage=1 00:10:11.897 --rc genhtml_legend=1 00:10:11.897 --rc geninfo_all_blocks=1 00:10:11.897 --rc geninfo_unexecuted_blocks=1 00:10:11.897 00:10:11.897 ' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:11.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.897 --rc genhtml_branch_coverage=1 00:10:11.897 --rc genhtml_function_coverage=1 00:10:11.897 --rc genhtml_legend=1 00:10:11.897 --rc geninfo_all_blocks=1 00:10:11.897 --rc geninfo_unexecuted_blocks=1 00:10:11.897 00:10:11.897 ' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.897 13:53:51 
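
One non-fatal wart worth flagging in the block above: common.sh line 33 evaluates '[' '' -eq 1 ']', and bash's test builtin rejects the empty left-hand side with "integer expression expected" before the script carries on. The -eq operator needs integers on both sides, and an unset or empty variable expands to nothing. A hedged reproduction of the failure mode plus two defensive spellings (MYFLAG is a placeholder name, not a variable used by this suite):

    # Reproduces the error printed above; the test fails (status 2) but the
    # script keeps running, exactly as in the log.
    unset MYFLAG
    [ "$MYFLAG" -eq 1 ] && echo on      # -> bash: [: : integer expression expected
    # Defensive variants that avoid the message entirely:
    [ "${MYFLAG:-0}" -eq 1 ] && echo on # default empty/unset to 0
    (( ${MYFLAG:-0} == 1 )) && echo on  # arithmetic context, same default
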
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.897 13:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:17.168 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.168 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:17.169 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:17.169 Found net devices under 0000:31:00.0: cvl_0_0 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:17.169 Found net devices under 0000:31:00.1: cvl_0_1 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.169 13:53:56 
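
The "Found 0000:31:00.0 (0x8086 - 0x159b)" lines above come from gather_supported_nvmf_pci_devs matching cached PCI vendor:device pairs (here the Intel E810 id pair) and then globbing each matching device's net/ directory to learn its kernel interface name. A standalone sketch of that sysfs walk, for illustration only (the real helper pre-caches the bus scan and also handles the rdma/mlx5 branches traced above):

    # Scan PCI devices for the E810 id pair printed in this log and report
    # the netdev that the kernel registered for each match.
    intel=0x8086 e810_dev=0x159b
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" && $device == "$e810_dev" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do          # interface name lives under .../net/
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done
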
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:10:17.169 00:10:17.169 --- 10.0.0.2 ping statistics --- 00:10:17.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.169 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:10:17.169 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:17.429 00:10:17.429 --- 10.0.0.1 ping statistics --- 00:10:17.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.429 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=760529 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 760529 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 760529 ']' 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.429 13:53:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.429 [2024-11-06 13:53:56.519635] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
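
For anyone reconstructing the test bed from this trace: the nvmf_tcp_init block above splits the two E810 ports across network namespaces so target and initiator can share one host, opens TCP port 4420 through iptables, and proves reachability with a ping in each direction before nvmf_tgt is launched inside the namespace. A condensed replay of those commands (run as root; interface and namespace names are the ones this log prints, and the full helper additionally flushes old addresses and tags its iptables rule with a comment):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    # the target app then runs inside the namespace, as traced above:
    # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
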
00:10:17.429 [2024-11-06 13:53:56.519682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.429 [2024-11-06 13:53:56.603464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.429 [2024-11-06 13:53:56.639970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.429 [2024-11-06 13:53:56.640002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.429 [2024-11-06 13:53:56.640010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.429 [2024-11-06 13:53:56.640017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.429 [2024-11-06 13:53:56.640022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.429 [2024-11-06 13:53:56.641797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.429 [2024-11-06 13:53:56.641932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.429 [2024-11-06 13:53:56.642059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.429 [2024-11-06 13:53:56.642060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:18.367 "tick_rate": 2400000000, 00:10:18.367 "poll_groups": [ 00:10:18.367 { 00:10:18.367 "name": "nvmf_tgt_poll_group_000", 00:10:18.367 "admin_qpairs": 0, 00:10:18.367 "io_qpairs": 0, 00:10:18.367 "current_admin_qpairs": 0, 00:10:18.367 "current_io_qpairs": 0, 00:10:18.367 "pending_bdev_io": 0, 00:10:18.367 "completed_nvme_io": 0, 00:10:18.367 "transports": [] 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "nvmf_tgt_poll_group_001", 00:10:18.367 "admin_qpairs": 0, 00:10:18.367 "io_qpairs": 0, 00:10:18.367 "current_admin_qpairs": 0, 00:10:18.367 "current_io_qpairs": 0, 00:10:18.367 "pending_bdev_io": 0, 00:10:18.367 "completed_nvme_io": 0, 00:10:18.367 "transports": [] 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "nvmf_tgt_poll_group_002", 00:10:18.367 "admin_qpairs": 0, 00:10:18.367 "io_qpairs": 0, 00:10:18.367 
"current_admin_qpairs": 0, 00:10:18.367 "current_io_qpairs": 0, 00:10:18.367 "pending_bdev_io": 0, 00:10:18.367 "completed_nvme_io": 0, 00:10:18.367 "transports": [] 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "nvmf_tgt_poll_group_003", 00:10:18.367 "admin_qpairs": 0, 00:10:18.367 "io_qpairs": 0, 00:10:18.367 "current_admin_qpairs": 0, 00:10:18.367 "current_io_qpairs": 0, 00:10:18.367 "pending_bdev_io": 0, 00:10:18.367 "completed_nvme_io": 0, 00:10:18.367 "transports": [] 00:10:18.367 } 00:10:18.367 ] 00:10:18.367 }' 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.367 [2024-11-06 13:53:57.410854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.367 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:18.367 "tick_rate": 2400000000, 00:10:18.367 "poll_groups": [ 00:10:18.367 { 00:10:18.367 "name": "nvmf_tgt_poll_group_000", 00:10:18.367 "admin_qpairs": 0, 00:10:18.367 "io_qpairs": 0, 00:10:18.367 "current_admin_qpairs": 0, 00:10:18.367 "current_io_qpairs": 0, 00:10:18.367 "pending_bdev_io": 0, 00:10:18.367 "completed_nvme_io": 0, 00:10:18.367 "transports": [ 00:10:18.367 { 00:10:18.367 "trtype": "TCP" 00:10:18.367 } 00:10:18.367 ] 00:10:18.367 }, 00:10:18.367 { 00:10:18.367 "name": "nvmf_tgt_poll_group_001", 00:10:18.367 "admin_qpairs": 0, 00:10:18.368 "io_qpairs": 0, 00:10:18.368 "current_admin_qpairs": 0, 00:10:18.368 "current_io_qpairs": 0, 00:10:18.368 "pending_bdev_io": 0, 00:10:18.368 "completed_nvme_io": 0, 00:10:18.368 "transports": [ 00:10:18.368 { 00:10:18.368 "trtype": "TCP" 00:10:18.368 } 00:10:18.368 ] 00:10:18.368 }, 00:10:18.368 { 00:10:18.368 "name": "nvmf_tgt_poll_group_002", 00:10:18.368 "admin_qpairs": 0, 00:10:18.368 "io_qpairs": 0, 00:10:18.368 "current_admin_qpairs": 0, 00:10:18.368 "current_io_qpairs": 0, 00:10:18.368 "pending_bdev_io": 0, 00:10:18.368 "completed_nvme_io": 0, 00:10:18.368 "transports": [ 00:10:18.368 { 00:10:18.368 "trtype": "TCP" 
00:10:18.368 } 00:10:18.368 ] 00:10:18.368 }, 00:10:18.368 { 00:10:18.368 "name": "nvmf_tgt_poll_group_003", 00:10:18.368 "admin_qpairs": 0, 00:10:18.368 "io_qpairs": 0, 00:10:18.368 "current_admin_qpairs": 0, 00:10:18.368 "current_io_qpairs": 0, 00:10:18.368 "pending_bdev_io": 0, 00:10:18.368 "completed_nvme_io": 0, 00:10:18.368 "transports": [ 00:10:18.368 { 00:10:18.368 "trtype": "TCP" 00:10:18.368 } 00:10:18.368 ] 00:10:18.368 } 00:10:18.368 ] 00:10:18.368 }' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.368 Malloc1 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.368 [2024-11-06 13:53:57.563598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:18.368 [2024-11-06 13:53:57.592447] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:10:18.368 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:18.368 could not add new controller: failed to write to nvme-fabrics device 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:18.368 13:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.368 13:53:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.275 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.275 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:20.275 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.275 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:20.275 13:53:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:22.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.182 [2024-11-06 13:54:01.185734] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:10:22.182 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:22.182 could not add new controller: failed to write to nvme-fabrics device 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.182 
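
The failed connects above are the point of this test: with allow_any_host disabled and no hosts registered, the target rejects the initiator's NQN ("does not allow host ... could not add new controller"), and access is then granted per host or globally over RPC. The same allow/deny sequence, condensed to direct rpc.py calls (HOSTNQN is shown as a placeholder; the trace uses the UUID produced by nvme gen-hostnqn at setup):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
    # no hosts registered, allow_any_host off -> nvme connect is rejected
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"     # now allowed
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"  # revoked again
    scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"       # open to all hosts
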
13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.182 13:54:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.559 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.559 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:23.559 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.559 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:23.559 13:54:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:25.466 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.725 
13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.725 [2024-11-06 13:54:04.790006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.725 13:54:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:27.102 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:27.102 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:27.102 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.102 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:27.102 13:54:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:29.003 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 [2024-11-06 13:54:08.351137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.262 13:54:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.639 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.639 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:30.639 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.639 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:30.639 13:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 [2024-11-06 13:54:11.989925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.175 13:54:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.175 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:33.175 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.175 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.175 13:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.553 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.553 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:34.553 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.553 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:34.553 13:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:36.456 
13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
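The waitforserial / waitforserial_disconnect pair xtraced above is how the harness confirms the kernel initiator actually surfaced (or dropped) the block device with serial SPDKISFASTANDAWESOME. An approximate reconstruction of the wait loop, inferred from the xtrace rather than copied from common/autotest_common.sh (the 15-iteration bound, the 2-second sleep, and the lsblk | grep -c probe are all visible in the trace; the retry sleep inside the loop is an assumption):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0    # optional expected count, default 1
        sleep 2                                             # initial settle time seen in the trace
        while (( i++ <= 15 )); do                           # bounded retry, roughly 30s worst case
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2                                         # assumed retry interval
        done
        return 1
    }

waitforserial_disconnect is the mirror image: it polls lsblk -l -o NAME,SERIAL | grep -q -w "$serial" until the serial no longer matches, then returns 0.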
00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.456 [2024-11-06 13:54:15.592810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.456 13:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.391 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.391 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:38.391 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.391 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:38.391 13:54:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:39.851 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:39.852 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:39.852 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.852 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
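Stripped of the xtrace noise, each cycle repeating above is the full lifecycle of one NVMe-oF subsystem, driven through target/rpc.sh lines 81-94. The same sequence issued directly against rpc.py would look like this (NQN, address, port, and serial are the values from this run; the hostnqn/hostid placeholders stand in for the UUIDs in the connect commands above, and the waitforserial gate sketched earlier sits between the connect and the disconnect):

    # Build the subsystem and expose it over NVMe/TCP.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # bdev Malloc1 as nsid 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # Exercise it from the kernel initiator, then tear it back down.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1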
00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 [2024-11-06 13:54:19.235446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.111 13:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.486 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.486 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:41.486 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.486 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:41.486 13:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:44.030 
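The `seq 1 5` just issued switches the test into its second phase (target/rpc.sh lines 99-107): five rounds of pure target-side churn in which the subsystem is created, given a listener and a namespace, and torn down again without any host ever connecting. Condensed into a sketch (same rpc.py stand-in as above; note that add_ns now omits -n, so the target auto-assigns nsid 1, which is what remove_ns then deletes):

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid auto-assigned
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done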
13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 [2024-11-06 13:54:22.891399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 [2024-11-06 13:54:22.939508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 
13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 [2024-11-06 13:54:22.987623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 [2024-11-06 13:54:23.035782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 [2024-11-06 13:54:23.083939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:44.031 "tick_rate": 2400000000, 00:10:44.031 "poll_groups": [ 00:10:44.031 { 00:10:44.031 "name": "nvmf_tgt_poll_group_000", 00:10:44.031 "admin_qpairs": 0, 00:10:44.031 "io_qpairs": 224, 00:10:44.031 "current_admin_qpairs": 0, 00:10:44.031 "current_io_qpairs": 0, 00:10:44.031 "pending_bdev_io": 0, 00:10:44.031 "completed_nvme_io": 521, 00:10:44.031 "transports": [ 00:10:44.031 { 00:10:44.031 "trtype": "TCP" 00:10:44.031 } 00:10:44.031 ] 00:10:44.031 }, 00:10:44.031 { 00:10:44.031 "name": "nvmf_tgt_poll_group_001", 00:10:44.031 "admin_qpairs": 1, 00:10:44.031 "io_qpairs": 223, 00:10:44.031 "current_admin_qpairs": 0, 00:10:44.031 "current_io_qpairs": 0, 00:10:44.031 "pending_bdev_io": 0, 00:10:44.031 "completed_nvme_io": 224, 00:10:44.031 "transports": [ 00:10:44.031 { 00:10:44.031 "trtype": "TCP" 00:10:44.031 } 00:10:44.031 ] 00:10:44.031 }, 00:10:44.031 { 00:10:44.031 "name": "nvmf_tgt_poll_group_002", 00:10:44.031 "admin_qpairs": 6, 00:10:44.031 "io_qpairs": 218, 00:10:44.031 "current_admin_qpairs": 0, 00:10:44.031 "current_io_qpairs": 0, 00:10:44.031 "pending_bdev_io": 0, 00:10:44.031 "completed_nvme_io": 218, 00:10:44.031 "transports": [ 00:10:44.031 { 00:10:44.031 "trtype": "TCP" 00:10:44.031 } 00:10:44.031 ] 00:10:44.031 }, 00:10:44.031 { 00:10:44.031 "name": "nvmf_tgt_poll_group_003", 00:10:44.031 "admin_qpairs": 0, 00:10:44.031 "io_qpairs": 224, 00:10:44.031 "current_admin_qpairs": 0, 00:10:44.031 "current_io_qpairs": 0, 00:10:44.031 "pending_bdev_io": 0, 00:10:44.031 "completed_nvme_io": 276, 00:10:44.031 "transports": [ 00:10:44.031 { 00:10:44.031 "trtype": "TCP" 00:10:44.031 } 00:10:44.031 ] 00:10:44.031 } 00:10:44.031 ] 00:10:44.031 }' 00:10:44.031 13:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.031 rmmod nvme_tcp 00:10:44.031 rmmod nvme_fabrics 00:10:44.031 rmmod nvme_keyring 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 760529 ']' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 760529 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 760529 ']' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 760529 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 760529 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:44.031 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 760529' 
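The two jsum calls above (target/rpc.sh lines 19-20) fold one numeric field out of the nvmf_get_stats JSON across all four poll groups. Reconstructed from the xtrace (how $stats is fed into jq is not visible in the trace, so the here-string below is an assumption):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 0 + 1 + 6 + 0         = 7,   matching (( 7 > 0 ))
    jsum '.poll_groups[].io_qpairs'      # 224 + 223 + 218 + 224 = 889, matching (( 889 > 0 ))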
00:10:44.032 killing process with pid 760529 00:10:44.032 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 760529 00:10:44.032 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 760529 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.291 13:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.195 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.195 00:10:46.195 real 0m34.561s 00:10:46.195 user 1m47.953s 00:10:46.195 sys 0m5.722s 00:10:46.195 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:46.195 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.195 ************************************ 00:10:46.195 END TEST nvmf_rpc 00:10:46.195 ************************************ 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.454 ************************************ 00:10:46.454 START TEST nvmf_invalid 00:10:46.454 ************************************ 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:46.454 * Looking for test storage... 
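Before the invalid-parameter test gets under way, note what the nvmftestfini teardown above actually did: killprocess verified that pid 760529 was still the reactor_0 process before killing it, the nvme-tcp/nvme-fabrics/nvme-keyring modules were unloaded, and the iptr helper (nvmf/common.sh line 791) scrubbed the test firewall rules by round-tripping the ruleset. The last steps, in plain shell:

    # Drop every iptables rule tagged SPDK_NVMF, keep everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the test network namespaces and flush the leftover test address.
    _remove_spdk_ns 15> /dev/null
    ip -4 addr flush cvl_0_1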
00:10:46.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.454 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.455 --rc genhtml_branch_coverage=1 00:10:46.455 --rc genhtml_function_coverage=1 00:10:46.455 --rc genhtml_legend=1 00:10:46.455 --rc geninfo_all_blocks=1 00:10:46.455 --rc geninfo_unexecuted_blocks=1 00:10:46.455 00:10:46.455 ' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.455 --rc genhtml_branch_coverage=1 00:10:46.455 --rc genhtml_function_coverage=1 00:10:46.455 --rc genhtml_legend=1 00:10:46.455 --rc geninfo_all_blocks=1 00:10:46.455 --rc geninfo_unexecuted_blocks=1 00:10:46.455 00:10:46.455 ' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.455 --rc genhtml_branch_coverage=1 00:10:46.455 --rc genhtml_function_coverage=1 00:10:46.455 --rc genhtml_legend=1 00:10:46.455 --rc geninfo_all_blocks=1 00:10:46.455 --rc geninfo_unexecuted_blocks=1 00:10:46.455 00:10:46.455 ' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.455 --rc genhtml_branch_coverage=1 00:10:46.455 --rc genhtml_function_coverage=1 00:10:46.455 --rc genhtml_legend=1 00:10:46.455 --rc geninfo_all_blocks=1 00:10:46.455 --rc geninfo_unexecuted_blocks=1 00:10:46.455 00:10:46.455 ' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:46.455 13:54:25 
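The lcov probe xtraced above feeds scripts/common.sh's cmp_versions, which splits both version strings on .-: and compares them component by component; `lt 1.15 2` returns 0 here because 1 < 2 in the first component, so the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage flag spelling is exported into LCOV_OPTS. A simplified sketch of the comparison (the real helper also tracks lt/gt/eq counters and validates each component with decimal(), as the trace shows):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk up to the longer version; missing components count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]    # every component equal
    }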
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.455 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.456 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.456 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.456 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.456 13:54:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:51.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:51.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:51.735 Found net devices under 0000:31:00.0: cvl_0_0 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:51.735 Found net devices under 0000:31:00.1: cvl_0_1 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:10:51.735 00:10:51.735 --- 10.0.0.2 ping statistics --- 00:10:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.735 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:10:51.735 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:10:51.735 00:10:51.735 --- 10.0.0.1 ping statistics --- 00:10:51.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.736 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=771266 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 771266 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 771266 ']' 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:51.736 13:54:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.736 [2024-11-06 13:54:30.996777] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
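The nvmf_tcp_init sequence traced above is the whole test topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, the two sides get 10.0.0.2 and 10.0.0.1, an iptables rule opens the NVMe/TCP listen port, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace. A minimal standalone re-creation of the same setup, using the interface names and addresses from this run (run from an SPDK checkout, as root):

    # Target-side port goes into its own namespace; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port (4420) on the initiator-facing interface, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace: shm id 0, tracepoint mask 0xFFFF, cores 0-3.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF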
00:10:51.736 [2024-11-06 13:54:30.996828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.996 [2024-11-06 13:54:31.082867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.996 [2024-11-06 13:54:31.124530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.996 [2024-11-06 13:54:31.124570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.996 [2024-11-06 13:54:31.124579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.996 [2024-11-06 13:54:31.124586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.996 [2024-11-06 13:54:31.124592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.996 [2024-11-06 13:54:31.126304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.996 [2024-11-06 13:54:31.126458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.996 [2024-11-06 13:54:31.126581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.996 [2024-11-06 13:54:31.126581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.565 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:52.565 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:10:52.566 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.566 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.566 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:52.566 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.566 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:52.566 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5879 00:10:52.825 [2024-11-06 13:54:31.941293] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:52.825 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:52.825 { 00:10:52.825 "nqn": "nqn.2016-06.io.spdk:cnode5879", 00:10:52.825 "tgt_name": "foobar", 00:10:52.825 "method": "nvmf_create_subsystem", 00:10:52.825 "req_id": 1 00:10:52.825 } 00:10:52.825 Got JSON-RPC error response 00:10:52.825 response: 00:10:52.825 { 00:10:52.825 "code": -32603, 00:10:52.825 "message": "Unable to find target foobar" 00:10:52.825 }' 00:10:52.826 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:52.826 { 00:10:52.826 "nqn": "nqn.2016-06.io.spdk:cnode5879", 00:10:52.826 "tgt_name": "foobar", 00:10:52.826 "method": "nvmf_create_subsystem", 00:10:52.826 "req_id": 1 00:10:52.826 } 00:10:52.826 Got JSON-RPC error response 00:10:52.826 
response: 00:10:52.826 { 00:10:52.826 "code": -32603, 00:10:52.826 "message": "Unable to find target foobar" 00:10:52.826 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:52.826 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:52.826 13:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26448 00:10:52.826 [2024-11-06 13:54:32.105824] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26448: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:53.086 { 00:10:53.086 "nqn": "nqn.2016-06.io.spdk:cnode26448", 00:10:53.086 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:53.086 "method": "nvmf_create_subsystem", 00:10:53.086 "req_id": 1 00:10:53.086 } 00:10:53.086 Got JSON-RPC error response 00:10:53.086 response: 00:10:53.086 { 00:10:53.086 "code": -32602, 00:10:53.086 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:53.086 }' 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:53.086 { 00:10:53.086 "nqn": "nqn.2016-06.io.spdk:cnode26448", 00:10:53.086 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:53.086 "method": "nvmf_create_subsystem", 00:10:53.086 "req_id": 1 00:10:53.086 } 00:10:53.086 Got JSON-RPC error response 00:10:53.086 response: 00:10:53.086 { 00:10:53.086 "code": -32602, 00:10:53.086 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:53.086 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25517 00:10:53.086 [2024-11-06 13:54:32.266367] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25517: invalid model number 'SPDK_Controller' 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:53.086 { 00:10:53.086 "nqn": "nqn.2016-06.io.spdk:cnode25517", 00:10:53.086 "model_number": "SPDK_Controller\u001f", 00:10:53.086 "method": "nvmf_create_subsystem", 00:10:53.086 "req_id": 1 00:10:53.086 } 00:10:53.086 Got JSON-RPC error response 00:10:53.086 response: 00:10:53.086 { 00:10:53.086 "code": -32602, 00:10:53.086 "message": "Invalid MN SPDK_Controller\u001f" 00:10:53.086 }' 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:53.086 { 00:10:53.086 "nqn": "nqn.2016-06.io.spdk:cnode25517", 00:10:53.086 "model_number": "SPDK_Controller\u001f", 00:10:53.086 "method": "nvmf_create_subsystem", 00:10:53.086 "req_id": 1 00:10:53.086 } 00:10:53.086 Got JSON-RPC error response 00:10:53.086 response: 00:10:53.086 { 00:10:53.086 "code": -32602, 00:10:53.086 "message": "Invalid MN SPDK_Controller\u001f" 00:10:53.086 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:53.086 13:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' [...] '126' '127')
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:10:53.086 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
[... 20 further printf %x / echo -e / string+= rounds elided; they append the remaining characters of the 21-character string echoed below ...]
00:10:53.348 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:53.348 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]]
00:10:53.348 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H[j`eF5@?[|kUpYW/=PLc'
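The condensed loop above is gen_random_s doing one character per iteration: invalid.sh keeps a chars table of the decimal codes 32 through 127 (printable ASCII plus DEL), picks one entry per round, and turns the code into a character by formatting it as hex (printf %x) and decoding the escape (echo -e '\xNN'). Because the script sets RANDOM=0 up front, the sequence is deterministic and the same "random" strings recur in every run. A sketch of the helper as reconstructed from this trace (the authoritative version lives in test/nvmf/target/invalid.sh; the $RANDOM selection is an assumption, since xtrace only shows the codes already chosen):

    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})   # decimal codes: space through DEL, as in the chars=() table above
        local string=
        for ((ll = 0; ll < length; ll++)); do
            # render one chosen code as a character: printf gives the hex, echo -e decodes it
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # the "[[ H == \- ]]" step above: a leading '-' would look like an option flag to rpc.py
        [[ ${string:0:1} != - ]] || string=" ${string:1}"   # hypothetical fix-up; the trace only shows the test
        echo "$string"
    }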
00:10:53.348 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'H[j`eF5@?[|kUpYW/=PLc' nqn.2016-06.io.spdk:cnode8376
00:10:53.348 [2024-11-06 13:54:32.523179] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8376: invalid serial number 'H[j`eF5@?[|kUpYW/=PLc'
00:10:53.348 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:10:53.348 {
00:10:53.348 "nqn": "nqn.2016-06.io.spdk:cnode8376",
00:10:53.348 "serial_number": "H[j`eF5@?[|kUpYW/=PLc",
00:10:53.348 "method": "nvmf_create_subsystem",
00:10:53.348 "req_id": 1
00:10:53.348 }
00:10:53.348 Got JSON-RPC error response
00:10:53.348 response:
00:10:53.348 {
00:10:53.348 "code": -32602,
00:10:53.348 "message": "Invalid SN H[j`eF5@?[|kUpYW/=PLc"
00:10:53.348 }'
00:10:53.348 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: [...] "message": "Invalid SN H[j`eF5@?[|kUpYW/=PLc" } == *\I\n\v\a\l\i\d\ \S\N* ]]
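Every negative case in invalid.sh follows the pattern just traced: call rpc.py with one deliberately out-of-spec argument, capture stdout and stderr into out, and match the JSON-RPC error text with a bash glob (xtrace prints that glob escaped, which is where the *\I\n\v\a\l\i\d\ \S\N* form above comes from). Schematically, with $rpc, $nqn, and gen_random_s as set up at the top of the script (the real test may structure its exit-status handling differently):

    # 21-char serial: one byte over the 20-byte NVMe SN field, so creation must fail.
    out=$($rpc nvmf_create_subsystem -s "$(gen_random_s 21)" "${nqn}8376" 2>&1) || true
    [[ $out == *"Invalid SN"* ]]
    # 41-char model number: one byte over the 40-byte NVMe MN field.
    out=$($rpc nvmf_create_subsystem -d "$(gen_random_s 41)" "${nqn}8892" 2>&1) || true
    [[ $out == *"Invalid MN"* ]]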
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' [...] '126' '127')
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:10:53.349 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
[... 40 further printf %x / echo -e / string+= rounds elided; they append the remaining characters of the 41-character string echoed below ...]
00:10:53.610 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:53.610 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]]
00:10:53.610 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5;.|n^FY^=Y@~m2]16i}LB\1MUEJHB'\''@1&2)8#;~'
00:10:53.610 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '5;.|n^FY^=Y@~m2]16i}LB\1MUEJHB'\''@1&2)8#;~' nqn.2016-06.io.spdk:cnode8892
00:10:53.870 [2024-11-06 13:54:32.904432] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8892: invalid model number '5;.|n^FY^=Y@~m2]16i}LB\1MUEJHB'@1&2)8#;~'
00:10:53.870 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:10:53.870 {
00:10:53.870 "nqn": "nqn.2016-06.io.spdk:cnode8892",
00:10:53.870 "model_number": "5;.|n^FY^=Y@~m2]16i}LB\\1MUEJHB'\''@1&2)8#;~\u007f",
00:10:53.870 "method": "nvmf_create_subsystem",
00:10:53.870 "req_id": 1
00:10:53.870 }
00:10:53.870 Got JSON-RPC error response
00:10:53.870 response:
00:10:53.870 {
00:10:53.870 "code": -32602, 00:10:53.870 "message": "Invalid MN 5;.|n^FY^=Y@~m2]16i}LB\\1MUEJHB'\''@1&2)8#;~\u007f" 00:10:53.870 }' 00:10:53.870 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:53.870 { 00:10:53.870 "nqn": "nqn.2016-06.io.spdk:cnode8892", 00:10:53.870 "model_number": "5;.|n^FY^=Y@~m2]16i}LB\\1MUEJHB'@1&2)8#;~\u007f", 00:10:53.870 "method": "nvmf_create_subsystem", 00:10:53.870 "req_id": 1 00:10:53.870 } 00:10:53.870 Got JSON-RPC error response 00:10:53.870 response: 00:10:53.870 { 00:10:53.870 "code": -32602, 00:10:53.870 "message": "Invalid MN 5;.|n^FY^=Y@~m2]16i}LB\\1MUEJHB'@1&2)8#;~\u007f" 00:10:53.870 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:53.870 13:54:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:53.870 [2024-11-06 13:54:33.065049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.870 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:54.129 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:54.129 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:54.129 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:54.129 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:54.130 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:54.130 [2024-11-06 13:54:33.387004] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:54.130 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:54.130 { 00:10:54.130 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:54.130 "listen_address": { 00:10:54.130 "trtype": "tcp", 00:10:54.130 "traddr": "", 00:10:54.130 "trsvcid": "4421" 00:10:54.130 }, 00:10:54.130 "method": "nvmf_subsystem_remove_listener", 00:10:54.130 "req_id": 1 00:10:54.130 } 00:10:54.130 Got JSON-RPC error response 00:10:54.130 response: 00:10:54.130 { 00:10:54.130 "code": -32602, 00:10:54.130 "message": "Invalid parameters" 00:10:54.130 }' 00:10:54.130 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:54.130 { 00:10:54.130 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:54.130 "listen_address": { 00:10:54.130 "trtype": "tcp", 00:10:54.130 "traddr": "", 00:10:54.130 "trsvcid": "4421" 00:10:54.130 }, 00:10:54.130 "method": "nvmf_subsystem_remove_listener", 00:10:54.130 "req_id": 1 00:10:54.130 } 00:10:54.130 Got JSON-RPC error response 00:10:54.130 response: 00:10:54.130 { 00:10:54.130 "code": -32602, 00:10:54.130 "message": "Invalid parameters" 00:10:54.130 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:54.130 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14203 -i 0 00:10:54.389 [2024-11-06 13:54:33.551489] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14203: invalid cntlid 
range [0-65519] 00:10:54.389 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:54.389 { 00:10:54.389 "nqn": "nqn.2016-06.io.spdk:cnode14203", 00:10:54.389 "min_cntlid": 0, 00:10:54.389 "method": "nvmf_create_subsystem", 00:10:54.389 "req_id": 1 00:10:54.389 } 00:10:54.389 Got JSON-RPC error response 00:10:54.389 response: 00:10:54.389 { 00:10:54.389 "code": -32602, 00:10:54.389 "message": "Invalid cntlid range [0-65519]" 00:10:54.389 }' 00:10:54.389 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:54.389 { 00:10:54.389 "nqn": "nqn.2016-06.io.spdk:cnode14203", 00:10:54.389 "min_cntlid": 0, 00:10:54.389 "method": "nvmf_create_subsystem", 00:10:54.389 "req_id": 1 00:10:54.389 } 00:10:54.389 Got JSON-RPC error response 00:10:54.389 response: 00:10:54.389 { 00:10:54.389 "code": -32602, 00:10:54.389 "message": "Invalid cntlid range [0-65519]" 00:10:54.389 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:54.389 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1546 -i 65520 00:10:54.649 [2024-11-06 13:54:33.711979] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1546: invalid cntlid range [65520-65519] 00:10:54.649 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:54.649 { 00:10:54.649 "nqn": "nqn.2016-06.io.spdk:cnode1546", 00:10:54.649 "min_cntlid": 65520, 00:10:54.649 "method": "nvmf_create_subsystem", 00:10:54.649 "req_id": 1 00:10:54.649 } 00:10:54.649 Got JSON-RPC error response 00:10:54.649 response: 00:10:54.649 { 00:10:54.649 "code": -32602, 00:10:54.649 "message": "Invalid cntlid range [65520-65519]" 00:10:54.649 }' 00:10:54.649 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:54.649 { 00:10:54.649 "nqn": "nqn.2016-06.io.spdk:cnode1546", 00:10:54.649 "min_cntlid": 65520, 00:10:54.649 "method": "nvmf_create_subsystem", 00:10:54.649 "req_id": 1 00:10:54.649 } 00:10:54.649 Got JSON-RPC error response 00:10:54.649 response: 00:10:54.649 { 00:10:54.649 "code": -32602, 00:10:54.649 "message": "Invalid cntlid range [65520-65519]" 00:10:54.649 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:54.649 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode118 -I 0 00:10:54.649 [2024-11-06 13:54:33.872483] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode118: invalid cntlid range [1-0] 00:10:54.649 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:54.649 { 00:10:54.649 "nqn": "nqn.2016-06.io.spdk:cnode118", 00:10:54.649 "max_cntlid": 0, 00:10:54.649 "method": "nvmf_create_subsystem", 00:10:54.649 "req_id": 1 00:10:54.649 } 00:10:54.649 Got JSON-RPC error response 00:10:54.649 response: 00:10:54.649 { 00:10:54.649 "code": -32602, 00:10:54.649 "message": "Invalid cntlid range [1-0]" 00:10:54.649 }' 00:10:54.649 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:54.649 { 00:10:54.649 "nqn": "nqn.2016-06.io.spdk:cnode118", 00:10:54.649 "max_cntlid": 0, 00:10:54.649 "method": "nvmf_create_subsystem", 00:10:54.649 "req_id": 1 00:10:54.649 } 00:10:54.649 Got JSON-RPC 
error response 00:10:54.649 response: 00:10:54.649 { 00:10:54.649 "code": -32602, 00:10:54.649 "message": "Invalid cntlid range [1-0]" 00:10:54.649 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:54.649 13:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11525 -I 65520 00:10:54.909 [2024-11-06 13:54:34.032986] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11525: invalid cntlid range [1-65520] 00:10:54.909 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:54.909 { 00:10:54.909 "nqn": "nqn.2016-06.io.spdk:cnode11525", 00:10:54.909 "max_cntlid": 65520, 00:10:54.909 "method": "nvmf_create_subsystem", 00:10:54.909 "req_id": 1 00:10:54.909 } 00:10:54.909 Got JSON-RPC error response 00:10:54.909 response: 00:10:54.909 { 00:10:54.909 "code": -32602, 00:10:54.909 "message": "Invalid cntlid range [1-65520]" 00:10:54.909 }' 00:10:54.909 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:54.909 { 00:10:54.909 "nqn": "nqn.2016-06.io.spdk:cnode11525", 00:10:54.909 "max_cntlid": 65520, 00:10:54.909 "method": "nvmf_create_subsystem", 00:10:54.909 "req_id": 1 00:10:54.909 } 00:10:54.909 Got JSON-RPC error response 00:10:54.909 response: 00:10:54.909 { 00:10:54.909 "code": -32602, 00:10:54.909 "message": "Invalid cntlid range [1-65520]" 00:10:54.909 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:54.909 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27129 -i 6 -I 5 00:10:55.169 [2024-11-06 13:54:34.197501] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27129: invalid cntlid range [6-5] 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:55.169 { 00:10:55.169 "nqn": "nqn.2016-06.io.spdk:cnode27129", 00:10:55.169 "min_cntlid": 6, 00:10:55.169 "max_cntlid": 5, 00:10:55.169 "method": "nvmf_create_subsystem", 00:10:55.169 "req_id": 1 00:10:55.169 } 00:10:55.169 Got JSON-RPC error response 00:10:55.169 response: 00:10:55.169 { 00:10:55.169 "code": -32602, 00:10:55.169 "message": "Invalid cntlid range [6-5]" 00:10:55.169 }' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:55.169 { 00:10:55.169 "nqn": "nqn.2016-06.io.spdk:cnode27129", 00:10:55.169 "min_cntlid": 6, 00:10:55.169 "max_cntlid": 5, 00:10:55.169 "method": "nvmf_create_subsystem", 00:10:55.169 "req_id": 1 00:10:55.169 } 00:10:55.169 Got JSON-RPC error response 00:10:55.169 response: 00:10:55.169 { 00:10:55.169 "code": -32602, 00:10:55.169 "message": "Invalid cntlid range [6-5]" 00:10:55.169 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:55.169 { 00:10:55.169 "name": "foobar", 00:10:55.169 "method": "nvmf_delete_target", 00:10:55.169 "req_id": 1 00:10:55.169 } 00:10:55.169 Got JSON-RPC error response 00:10:55.169 response: 00:10:55.169 { 00:10:55.169 
"code": -32602, 00:10:55.169 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:55.169 }' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:55.169 { 00:10:55.169 "name": "foobar", 00:10:55.169 "method": "nvmf_delete_target", 00:10:55.169 "req_id": 1 00:10:55.169 } 00:10:55.169 Got JSON-RPC error response 00:10:55.169 response: 00:10:55.169 { 00:10:55.169 "code": -32602, 00:10:55.169 "message": "The specified target doesn't exist, cannot delete it." 00:10:55.169 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.169 rmmod nvme_tcp 00:10:55.169 rmmod nvme_fabrics 00:10:55.169 rmmod nvme_keyring 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 771266 ']' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 771266 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 771266 ']' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 771266 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 771266 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 771266' 00:10:55.169 killing process with pid 771266 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 771266 00:10:55.169 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 771266 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:10:55.428 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.429 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.429 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.429 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.429 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.429 13:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.334 00:10:57.334 real 0m11.073s 00:10:57.334 user 0m17.088s 00:10:57.334 sys 0m4.690s 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:57.334 ************************************ 00:10:57.334 END TEST nvmf_invalid 00:10:57.334 ************************************ 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.334 13:54:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.593 ************************************ 00:10:57.593 START TEST nvmf_connect_stress 00:10:57.593 ************************************ 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:57.593 * Looking for test storage... 
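The nvmf_invalid suite that just finished above drives nvmf_create_subsystem through rpc.py with out-of-range controller IDs and asserts on the JSON-RPC error text. A minimal bash sketch of that pattern, assuming rpc.py exits nonzero on an error response (which the captures above rely on); expect_rpc_error is a hypothetical helper, while the rpc.py path, the -i/-I flags (min_cntlid/max_cntlid), the NQNs, and the error strings are taken from the log:

# Sketch only: expect_rpc_error is a hypothetical helper, not part of the suite.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

expect_rpc_error() {
    # Succeed only if the RPC fails AND its combined output contains $1.
    local pattern=$1; shift
    local out
    out=$("$rpc" "$@" 2>&1) && return 1   # the call is required to fail
    [[ $out == *"$pattern"* ]]            # ...with the expected error text
}

# min_cntlid below the valid range -> "Invalid cntlid range [0-65519]"
expect_rpc_error 'Invalid cntlid range' \
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14203 -i 0
# min_cntlid greater than max_cntlid -> "Invalid cntlid range [6-5]"
expect_rpc_error 'Invalid cntlid range' \
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27129 -i 6 -I 5

The same shape covers the other negative cases above: the 41-character random model number passed with -d, a too-large max_cntlid (-I 65520), and nvmf_delete_target on a nonexistent target name.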
00:10:57.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.593 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:57.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.593 --rc genhtml_branch_coverage=1 00:10:57.594 --rc genhtml_function_coverage=1 00:10:57.594 --rc genhtml_legend=1 00:10:57.594 --rc geninfo_all_blocks=1 00:10:57.594 --rc geninfo_unexecuted_blocks=1 00:10:57.594 00:10:57.594 ' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:57.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.594 --rc genhtml_branch_coverage=1 00:10:57.594 --rc genhtml_function_coverage=1 00:10:57.594 --rc genhtml_legend=1 00:10:57.594 --rc geninfo_all_blocks=1 00:10:57.594 --rc geninfo_unexecuted_blocks=1 00:10:57.594 00:10:57.594 ' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:57.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.594 --rc genhtml_branch_coverage=1 00:10:57.594 --rc genhtml_function_coverage=1 00:10:57.594 --rc genhtml_legend=1 00:10:57.594 --rc geninfo_all_blocks=1 00:10:57.594 --rc geninfo_unexecuted_blocks=1 00:10:57.594 00:10:57.594 ' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:57.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.594 --rc genhtml_branch_coverage=1 00:10:57.594 --rc genhtml_function_coverage=1 00:10:57.594 --rc genhtml_legend=1 00:10:57.594 --rc geninfo_all_blocks=1 00:10:57.594 --rc geninfo_unexecuted_blocks=1 00:10:57.594 00:10:57.594 ' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:57.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.594 13:54:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.871 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.872 13:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:02.872 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:02.872 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:02.872 Found net devices under 0000:31:00.0: cvl_0_0 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:02.872 Found net devices under 0000:31:00.1: cvl_0_1 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.872 13:54:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.872 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.872 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.872 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.872 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:11:03.133 00:11:03.133 --- 10.0.0.2 ping statistics --- 00:11:03.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.133 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:03.133 00:11:03.133 --- 10.0.0.1 ping statistics --- 00:11:03.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.133 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=776686 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 776686 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 776686 ']' 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
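Before nvmf_tgt is launched below, the harness has isolated the target-side e810 port in its own network namespace and verified L3 reachability in both directions. A condensed replay of those steps as logged above; the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.x addresses are specific to this rig and will differ on other hardware:

# Condensed from the nvmf_tcp_init steps recorded in the log above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the firewall rule so teardown can identify every harness-added rule:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The SPDK_NVMF comment tag is what the teardown at the end of nvmf_invalid relied on: iptables-save | grep -v SPDK_NVMF | iptables-restore strips every rule the harness added in one pass.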
00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.133 13:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.133 [2024-11-06 13:54:42.296747] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:11:03.133 [2024-11-06 13:54:42.296813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.133 [2024-11-06 13:54:42.389634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.393 [2024-11-06 13:54:42.441443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.393 [2024-11-06 13:54:42.441496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.393 [2024-11-06 13:54:42.441505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.393 [2024-11-06 13:54:42.441512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.393 [2024-11-06 13:54:42.441522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.393 [2024-11-06 13:54:42.443682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.393 [2024-11-06 13:54:42.443838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.393 [2024-11-06 13:54:42.443840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 [2024-11-06 13:54:43.115135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:03.963 13:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 [2024-11-06 13:54:43.132285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 NULL1 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=776809 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.963 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.964 13:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.964 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.534 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.534 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:04.534 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.534 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.534 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.793 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.793 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:04.793 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.793 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.793 13:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.053 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.053 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:05.053 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.053 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.053 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.312 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.312 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:05.312 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.312 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.312 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.572 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.572 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:05.572 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.572 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.572 13:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.141 13:54:45 
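
The short block repeating above, and through the entries that follow, is the stress loop itself: connect_stress.sh checks the tool for liveness with kill -0 (which delivers no signal, it only tests that the PID exists) and issues RPC work on each pass. A condensed sketch of that pattern, with the per-iteration body reduced to a placeholder:

  # Liveness-polling pattern reconstructed from the trace; the loop body is
  # a hypothetical stand-in for the rpc.txt replay the real script performs.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"     # placeholder for the per-pass RPC work
  done
  # Once the tool exits, kill -0 reports "No such process" (seen later in
  # this log) and the script collects the exit status.
  wait "$PERF_PID"
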
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.141 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:06.141 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.141 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.141 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.401 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.401 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:06.401 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.401 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.401 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.661 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.661 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:06.661 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.661 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.661 13:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.921 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.921 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:06.921 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.921 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.921 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.181 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.181 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:07.181 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.181 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.181 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.751 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.751 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:07.751 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.751 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.751 13:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.011 13:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.011 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:08.011 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.011 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.011 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.270 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.270 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:08.270 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.270 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.270 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.530 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.531 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:08.531 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.531 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.531 13:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.790 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.790 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:08.790 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.790 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.790 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.359 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.359 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:09.359 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.359 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.359 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.618 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.618 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:09.618 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.618 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.618 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.877 13:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.877 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:09.877 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.877 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.877 13:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.137 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.137 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:10.137 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.137 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.137 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.396 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.396 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:10.396 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.396 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.396 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.032 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.032 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:11.032 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.032 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.032 13:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.296 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.296 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:11.296 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.296 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.296 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.556 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.556 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:11.556 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.556 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.556 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.817 13:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.817 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:11.817 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.817 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.817 13:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.076 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.076 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:12.076 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.076 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.076 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.335 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.335 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:12.335 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.335 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.335 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.902 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.902 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:12.902 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.902 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.902 13:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.159 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.159 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:13.159 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.160 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.160 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.417 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.417 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:13.417 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.417 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.417 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 13:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.676 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:13.676 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.676 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.676 13:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.935 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.935 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:13.935 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.935 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.935 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.195 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 776809 00:11:14.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (776809) - No such process 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 776809 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.455 rmmod nvme_tcp 00:11:14.455 rmmod nvme_fabrics 00:11:14.455 rmmod nvme_keyring 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 776686 ']' 00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 776686 00:11:14.455 13:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 776686 ']'
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 776686
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 776686
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 776686'
00:11:14.455 killing process with pid 776686
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 776686
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 776686
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:14.455 13:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:16.992
00:11:16.992 real 0m19.116s
00:11:16.992 user 0m43.189s
00:11:16.992 sys 0m6.209s
00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:16.992 ************************************
00:11:16.992 END TEST nvmf_connect_stress
00:11:16.992 ************************************
00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
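
The teardown just traced follows a killprocess pattern worth reading closely: verify the PID still exists, refuse to kill anything whose command name is sudo, then kill and reap it; finally the SPDK-tagged firewall rules are dropped by re-applying a filtered iptables dump. A simplified reconstruction of those steps:

  # Simplified reconstruction of the killprocess/cleanup steps traced above.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1      # never kill a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null
  }

  # Drop only the iptables rules tagged SPDK_NVMF, keeping everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
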
00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.992 ************************************ 00:11:16.992 START TEST nvmf_fused_ordering 00:11:16.992 ************************************ 00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:16.992 * Looking for test storage... 00:11:16.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:11:16.992 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:16.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.993 --rc genhtml_branch_coverage=1 00:11:16.993 --rc genhtml_function_coverage=1 00:11:16.993 --rc genhtml_legend=1 00:11:16.993 --rc geninfo_all_blocks=1 00:11:16.993 --rc geninfo_unexecuted_blocks=1 00:11:16.993 00:11:16.993 ' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:16.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.993 --rc genhtml_branch_coverage=1 00:11:16.993 --rc genhtml_function_coverage=1 00:11:16.993 --rc genhtml_legend=1 00:11:16.993 --rc geninfo_all_blocks=1 00:11:16.993 --rc geninfo_unexecuted_blocks=1 00:11:16.993 00:11:16.993 ' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:16.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.993 --rc genhtml_branch_coverage=1 00:11:16.993 --rc genhtml_function_coverage=1 00:11:16.993 --rc genhtml_legend=1 00:11:16.993 --rc geninfo_all_blocks=1 00:11:16.993 --rc geninfo_unexecuted_blocks=1 00:11:16.993 00:11:16.993 ' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:16.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.993 --rc genhtml_branch_coverage=1 00:11:16.993 --rc genhtml_function_coverage=1 00:11:16.993 --rc genhtml_legend=1 00:11:16.993 --rc geninfo_all_blocks=1 00:11:16.993 --rc geninfo_unexecuted_blocks=1 00:11:16.993 00:11:16.993 ' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
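
The lcov check traced above is a pure-bash version comparison: both version strings are split on ".", "-" and ":" and the fields are compared numerically, with missing fields treated as 0. A self-contained sketch of the less-than case exercised here (the real cmp_versions also handles the other comparison operators):

  # Field-wise "less than" version compare, mirroring the cmp_versions walk
  # in the trace above. The real helper also supports >, =, and friends.
  lt() {   # usage: lt 1.15 2   -> succeeds because 1.15 < 2
      local -a ver1 ver2
      local v ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"
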
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
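
The PATH echoed above has accumulated many duplicate /opt/... prefixes because paths/export.sh prepends its directories unconditionally each time it is sourced. This is harmless but noisy; a guard of the following shape would keep the prepend idempotent (a suggestion, not what the script currently does):

  # Idempotent PATH prepend: a possible fix for the duplication visible in
  # the exported PATH above, not the current paths/export.sh behavior.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;          # already present: leave PATH alone
          *) PATH=$1:$PATH ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH
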
-- # '[' '' -eq 1 ']' 00:11:16.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.993 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.994 13:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.269 13:55:01 
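
The "[: : integer expression expected" message above is bash's [ builtin rejecting an arithmetic test whose operand expanded to an empty string ('[' '' -eq 1 ']' at nvmf/common.sh line 33). Defaulting the operand keeps the test well formed; the variable name below is only an illustration, since the log does not show which variable was empty:

  # An empty value reaching [ ... -eq 1 ] produces exactly the error above.
  # "flag" is an illustrative name, not taken from the script.
  flag=""                           # imagine this arrived unset or empty
  if [ "${flag:-0}" -eq 1 ]; then   # the :-0 default prevents the [: error
      echo "flag is set"
  fi
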
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.269 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:22.270 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:22.270 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:22.270 Found net devices under 0000:31:00.0: cvl_0_0 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:22.270 Found net devices under 0000:31:00.1: cvl_0_1 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
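
Device discovery above first matches supported PCI IDs (the two 0x8086:0x159b functions are Intel e810 ports) and then resolves each PCI function to its kernel net device by globbing sysfs. A standalone sketch of that resolution step; the up-state bookkeeping in the real script is more involved, so the operstate check here is a simplification:

  # Map NVMf-capable PCI functions to their kernel interfaces via sysfs.
  # PCI addresses are copied from the log; operstate check is a simplification.
  for pci in 0000:31:00.0 0000:31:00.1; do
      for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$net_dev" ] || continue      # glob did not match: no netdev
          dev=${net_dev##*/}
          [ "$(cat "$net_dev/operstate")" = up ] || continue
          echo "Found net devices under $pci: $dev"
      done
  done
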
-- # net_devs+=("${pci_net_devs[@]}") 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:22.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:22.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms
00:11:22.270
00:11:22.270 --- 10.0.0.2 ping statistics ---
00:11:22.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:22.270 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:22.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:22.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms
00:11:22.270
00:11:22.270 --- 10.0.0.1 ping statistics ---
00:11:22.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:22.270 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=783498
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 783498
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 783498 ']'
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:22.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
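
Everything from gather_supported_nvmf_pci_devs down to the two pings above is target/initiator plumbing: the target NIC cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, port 4420 is admitted through the firewall with a tagged rule, and one ping in each direction proves the path. A condensed sketch of that sequence:

  # Condensed sketch of the namespace plumbing traced above.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target NIC into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF               # tag for teardown cleanup
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
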
00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.270 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:22.270 [2024-11-06 13:55:01.337547] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:11:22.270 [2024-11-06 13:55:01.337597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.270 [2024-11-06 13:55:01.410733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.270 [2024-11-06 13:55:01.439473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.270 [2024-11-06 13:55:01.439500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.270 [2024-11-06 13:55:01.439505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.271 [2024-11-06 13:55:01.439510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.271 [2024-11-06 13:55:01.439514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.271 [2024-11-06 13:55:01.439952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.271 [2024-11-06 13:55:01.539016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
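
nvmfappstart launches nvmf_tgt inside the namespace and then spins in waitforlisten until the RPC socket answers, rather than sleeping a fixed time. A sketch of that start-and-wait shape; the 100-retry bound appears in the trace, while the probe command and interval are assumptions:

  # Launch the target in the namespace, then poll its RPC socket.
  # The retry bound (100) is from the trace; probe and interval are assumed.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # illustrative shorthand
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  for ((i = 0; i < 100; i++)); do
      "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
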
00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.271 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.530 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.530 [2024-11-06 13:55:01.555200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.530 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.530 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.531 NULL1 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.531 13:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:22.531 [2024-11-06 13:55:01.597291] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
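The fused_ordering binary invoked above is an SPDK initiator-side test tool: it dials the listener using the transport ID string passed via -r and then emits the numbered iteration lines that follow. For orientation, the same listener could also be reached from the kernel initiator with nvme-cli; this is a hypothetical equivalent for manual inspection, not something this run executes:

  # kernel-initiator equivalent of 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list          # the namespace backed by the null bdev should appear as a 1000 MiB device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1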
00:11:22.531 [2024-11-06 13:55:01.597320] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783525 ] 00:11:23.100 Attached to nqn.2016-06.io.spdk:cnode1 00:11:23.100 Namespace ID: 1 size: 1GB 00:11:23.100 fused_ordering(0)
[fused_ordering(1) through fused_ordering(958) elided: one log line per iteration, emitted in strict ascending order between 00:11:23.100 and 00:11:24.764 with no gaps or out-of-order entries; the final iterations 959-1023 follow verbatim]
00:11:24.764 fused_ordering(959) 00:11:24.764 fused_ordering(960) 00:11:24.764 fused_ordering(961) 00:11:24.764 fused_ordering(962) 00:11:24.764 fused_ordering(963) 00:11:24.764 fused_ordering(964) 00:11:24.764 fused_ordering(965) 00:11:24.764 fused_ordering(966) 00:11:24.764 fused_ordering(967) 00:11:24.764 fused_ordering(968) 00:11:24.764 fused_ordering(969) 00:11:24.764 fused_ordering(970) 00:11:24.764 fused_ordering(971) 00:11:24.764 fused_ordering(972) 00:11:24.764 fused_ordering(973) 00:11:24.764 fused_ordering(974) 00:11:24.764 fused_ordering(975) 00:11:24.764 fused_ordering(976) 00:11:24.764 fused_ordering(977) 00:11:24.764 fused_ordering(978) 00:11:24.764 fused_ordering(979) 00:11:24.764 fused_ordering(980) 00:11:24.764 fused_ordering(981) 00:11:24.764 fused_ordering(982) 00:11:24.764 fused_ordering(983) 00:11:24.764 fused_ordering(984) 00:11:24.764 fused_ordering(985) 00:11:24.764 fused_ordering(986) 00:11:24.764 fused_ordering(987) 00:11:24.764 fused_ordering(988) 00:11:24.764 fused_ordering(989) 00:11:24.764 fused_ordering(990) 00:11:24.764 fused_ordering(991) 00:11:24.764 fused_ordering(992) 00:11:24.764 fused_ordering(993) 00:11:24.764 fused_ordering(994) 00:11:24.764 fused_ordering(995) 00:11:24.764 fused_ordering(996) 00:11:24.764 fused_ordering(997) 00:11:24.764 fused_ordering(998) 00:11:24.764 fused_ordering(999) 00:11:24.764 fused_ordering(1000) 00:11:24.764 fused_ordering(1001) 00:11:24.764 fused_ordering(1002) 00:11:24.764 fused_ordering(1003) 00:11:24.764 fused_ordering(1004) 00:11:24.764 fused_ordering(1005) 00:11:24.764 fused_ordering(1006) 00:11:24.764 fused_ordering(1007) 00:11:24.764 fused_ordering(1008) 00:11:24.764 fused_ordering(1009) 00:11:24.764 fused_ordering(1010) 00:11:24.764 fused_ordering(1011) 00:11:24.764 fused_ordering(1012) 00:11:24.764 fused_ordering(1013) 00:11:24.764 fused_ordering(1014) 00:11:24.764 fused_ordering(1015) 00:11:24.764 fused_ordering(1016) 00:11:24.764 fused_ordering(1017) 00:11:24.764 fused_ordering(1018) 00:11:24.764 fused_ordering(1019) 00:11:24.764 fused_ordering(1020) 00:11:24.764 fused_ordering(1021) 00:11:24.764 fused_ordering(1022) 00:11:24.764 fused_ordering(1023) 00:11:24.764 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.765 rmmod nvme_tcp 00:11:24.765 rmmod nvme_fabrics 00:11:24.765 rmmod nvme_keyring 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:24.765 13:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 783498 ']' 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 783498 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 783498 ']' 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 783498 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.765 13:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 783498 00:11:24.765 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:24.765 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:24.765 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 783498' 00:11:24.765 killing process with pid 783498 00:11:24.765 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 783498 00:11:24.765 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 783498 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.025 13:55:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.930 00:11:26.930 real 0m10.376s 00:11:26.930 user 0m5.879s 00:11:26.930 sys 0m4.913s 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 ************************************ 00:11:26.930 END TEST nvmf_fused_ordering 00:11:26.930 
************************************ 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 ************************************ 00:11:26.930 START TEST nvmf_ns_masking 00:11:26.930 ************************************ 00:11:26.930 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:27.190 * Looking for test storage... 00:11:27.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:27.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.190 --rc genhtml_branch_coverage=1 00:11:27.190 --rc genhtml_function_coverage=1 00:11:27.190 --rc genhtml_legend=1 00:11:27.190 --rc geninfo_all_blocks=1 00:11:27.190 --rc geninfo_unexecuted_blocks=1 00:11:27.190 00:11:27.190 ' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:27.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.190 --rc genhtml_branch_coverage=1 00:11:27.190 --rc genhtml_function_coverage=1 00:11:27.190 --rc genhtml_legend=1 00:11:27.190 --rc geninfo_all_blocks=1 00:11:27.190 --rc geninfo_unexecuted_blocks=1 00:11:27.190 00:11:27.190 ' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:27.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.190 --rc genhtml_branch_coverage=1 00:11:27.190 --rc genhtml_function_coverage=1 00:11:27.190 --rc genhtml_legend=1 00:11:27.190 --rc geninfo_all_blocks=1 00:11:27.190 --rc geninfo_unexecuted_blocks=1 00:11:27.190 00:11:27.190 ' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:27.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.190 --rc genhtml_branch_coverage=1 00:11:27.190 --rc genhtml_function_coverage=1 00:11:27.190 --rc genhtml_legend=1 00:11:27.190 --rc geninfo_all_blocks=1 00:11:27.190 --rc geninfo_unexecuted_blocks=1 00:11:27.190 00:11:27.190 ' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.190 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0fbe38ba-19a8-4bd5-b598-2cdaef4efb93 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8a902e4f-38ff-4882-a660-4d5a0e37a081 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d34565af-4b6c-49bf-b7a6-1b37eab10612 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.191 13:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.484 13:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:32.484 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:32.484 13:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:32.484 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:32.484 Found net devices under 0000:31:00.0: cvl_0_0 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
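Note: the scan above matches PCI vendor:device pairs against the e810/x722/mlx tables (0x8086:0x159b is an Intel E810 port, bound here to the ice driver) and then resolves each matched function to its kernel net devices through sysfs. A minimal sketch of that lookup, using the two addresses found in this run (they differ per machine):

    # Net devices behind each matched E810 function, via the same
    # /sys/bus/pci/devices/$pci/net/ glob the harness uses.
    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done

On this host both ports resolve to the renamed interfaces cvl_0_0 and cvl_0_1.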
00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:32.484 Found net devices under 0000:31:00.1: cvl_0_1 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.484 13:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.484 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:11:32.485 00:11:32.485 --- 10.0.0.2 ping statistics --- 00:11:32.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.485 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:11:32.485 00:11:32.485 --- 10.0.0.1 ping statistics --- 00:11:32.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.485 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=788525 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 788525 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 788525 ']' 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:32.485 13:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:32.485 13:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:32.745 [2024-11-06 13:55:11.799625] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:11:32.745 [2024-11-06 13:55:11.799687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.745 [2024-11-06 13:55:11.891203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.745 [2024-11-06 13:55:11.941709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.745 [2024-11-06 13:55:11.941762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.745 [2024-11-06 13:55:11.941771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.745 [2024-11-06 13:55:11.941779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.745 [2024-11-06 13:55:11.941785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
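Note: nvmf_tcp_init built a point-to-point topology from those two ports: cvl_0_0 moves into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule opens TCP port 4420, and both directions are ping-verified before the target starts inside that namespace, exactly as traced above (repo path shortened here):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF

Per the startup notices, -i 0 selects shared-memory ID 0 (so 'spdk_trace -s nvmf -i 0' can attach) and -e 0xFFFF enables the full tracepoint group mask.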
00:11:32.745 [2024-11-06 13:55:11.942583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.313 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:33.313 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:11:33.313 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.313 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:33.313 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:33.572 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.572 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:33.572 [2024-11-06 13:55:12.751021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.572 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:33.572 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:33.572 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:33.831 Malloc1 00:11:33.831 13:55:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:33.831 Malloc2 00:11:34.090 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.090 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:34.350 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.350 [2024-11-06 13:55:13.623224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.610 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:34.610 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d34565af-4b6c-49bf-b7a6-1b37eab10612 -a 10.0.0.2 -s 4420 -i 4 00:11:34.610 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.610 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:11:34.610 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.610 13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:34.610 
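Note: provisioning for the first scenario is a handful of JSON-RPC calls plus one nvme-cli connect. Condensed from the trace, with rpc.py standing in for the full scripts/rpc.py path and values as used in this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I d34565af-4b6c-49bf-b7a6-1b37eab10612 -a 10.0.0.2 -s 4420 -i 4

The -I argument pins the host identifier so the visibility checks below are deterministic; -i 4 requests four I/O queues.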
13:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:36.520 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:36.779 [ 0]:0x1 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e0041bd50444759856dcf289ee9d915 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e0041bd50444759856dcf289ee9d915 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.779 13:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:37.039 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:37.039 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.040 [ 0]:0x1 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e0041bd50444759856dcf289ee9d915 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e0041bd50444759856dcf289ee9d915 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.040 13:55:16 
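Note: two helpers drive these assertions. waitforserial polls lsblk until the expected number of namespaces carrying the subsystem serial is attached, and ns_is_visible greps the controller's namespace list and reads the NGUID, which tells whether the host is allowed to see the namespace. Condensed from the trace:

    # One device with the test serial should be attached
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # Is namespace 1 listed, and which NGUID does it report?
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

A visible namespace reports its real NGUID (0e0041bd... for Malloc1 here); a masked one reports all zeroes.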
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:37.040 [ 1]:0x2 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.040 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.300 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:37.300 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:37.300 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d34565af-4b6c-49bf-b7a6-1b37eab10612 -a 10.0.0.2 -s 4420 -i 4 00:11:37.560 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:37.560 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:11:37.560 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.560 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:11:37.560 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:11:37.560 13:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:39.470 [ 0]:0x2 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.470 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:39.730 [ 0]:0x1 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e0041bd50444759856dcf289ee9d915 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e0041bd50444759856dcf289ee9d915 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.730 [ 1]:0x2 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.730 13:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.991 13:55:19 
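Note: this is the core masking flow. Namespace 1 was re-added with --no-auto-visible, so it reports an all-zero NGUID (effectively hidden) until a host is explicitly granted access: nvmf_ns_add_host makes it visible to host1 with its real NGUID, and nvmf_ns_remove_host revokes the grant, which the following NOT ns_is_visible check confirms. The RPC triple, with rpc.py again abbreviating the full path:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Namespace 2 was added without the flag, so it is auto-visible and stays reachable throughout.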
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:39.991 [ 0]:0x2 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.991 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:40.252 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:40.252 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d34565af-4b6c-49bf-b7a6-1b37eab10612 -a 10.0.0.2 -s 4420 -i 4 00:11:40.252 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:40.252 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:11:40.253 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.253 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:11:40.253 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:11:40.253 13:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.793 [ 0]:0x1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e0041bd50444759856dcf289ee9d915 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e0041bd50444759856dcf289ee9d915 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.793 [ 1]:0x2 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.793 [ 0]:0x2 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.793 13:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:42.793 13:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.793 [2024-11-06 13:55:22.014207] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:42.793 request: 00:11:42.793 { 00:11:42.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.793 "nsid": 2, 00:11:42.793 "host": "nqn.2016-06.io.spdk:host1", 00:11:42.793 "method": "nvmf_ns_remove_host", 00:11:42.793 "req_id": 1 00:11:42.793 } 00:11:42.793 Got JSON-RPC error response 00:11:42.793 response: 00:11:42.793 { 00:11:42.793 "code": -32602, 00:11:42.793 "message": "Invalid parameters" 00:11:42.793 } 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.793 13:55:22 
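Note: the NOT wrapper above asserts a failure path: namespace 2 was created auto-visible, so its per-host visibility cannot be edited, and nvmf_ns_remove_host is rejected with JSON-RPC error -32602 (Invalid parameters), matching the nvmf_rpc_ns_visible_paused error in the dump. The failing call, for reference:

    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1   # expected: -32602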
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:42.793 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.794 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:42.794 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.794 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.794 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.794 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:43.053 [ 0]:0x2 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=63cbdae8933b4f84a6f4822a3ae672e7 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 63cbdae8933b4f84a6f4822a3ae672e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=791019 00:11:43.053 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 791019 
/var/tmp/host.sock 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 791019 ']' 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:43.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.054 13:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:43.313 [2024-11-06 13:55:22.358585] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:11:43.313 [2024-11-06 13:55:22.358637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791019 ] 00:11:43.313 [2024-11-06 13:55:22.439292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.313 [2024-11-06 13:55:22.475171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.883 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.883 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:11:43.883 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.174 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.174 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0fbe38ba-19a8-4bd5-b598-2cdaef4efb93 00:11:44.174 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:44.434 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FBE38BA19A84BD5B5982CDAEF4EFB93 -i 00:11:44.434 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8a902e4f-38ff-4882-a660-4d5a0e37a081 00:11:44.434 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:44.434 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8A902E4F38FF4882A6604D5A0E37A081 -i 00:11:44.693 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.693 13:55:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:44.953 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:44.953 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:45.213 nvme0n1 00:11:45.213 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:45.213 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:45.782 nvme1n2 00:11:45.782 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:45.782 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:45.782 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:45.782 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:45.782 13:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:45.782 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:45.782 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:45.782 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:45.782 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:46.042 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0fbe38ba-19a8-4bd5-b598-2cdaef4efb93 == \0\f\b\e\3\8\b\a\-\1\9\a\8\-\4\b\d\5\-\b\5\9\8\-\2\c\d\a\e\f\4\e\f\b\9\3 ]] 00:11:46.042 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:46.042 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:46.042 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:46.302 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
8a902e4f-38ff-4882-a660-4d5a0e37a081 == \8\a\9\0\2\e\4\f\-\3\8\f\f\-\4\8\8\2\-\a\6\6\0\-\4\d\5\a\0\e\3\7\a\0\8\1 ]] 00:11:46.302 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.302 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0fbe38ba-19a8-4bd5-b598-2cdaef4efb93 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FBE38BA19A84BD5B5982CDAEF4EFB93 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FBE38BA19A84BD5B5982CDAEF4EFB93 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FBE38BA19A84BD5B5982CDAEF4EFB93 00:11:46.563 [2024-11-06 13:55:25.816199] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:46.563 [2024-11-06 13:55:25.816230] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:46.563 [2024-11-06 13:55:25.816237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.563 request: 00:11:46.563 { 00:11:46.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.563 "namespace": { 00:11:46.563 "bdev_name": 
"invalid", 00:11:46.563 "nsid": 1, 00:11:46.563 "nguid": "0FBE38BA19A84BD5B5982CDAEF4EFB93", 00:11:46.563 "no_auto_visible": false 00:11:46.563 }, 00:11:46.563 "method": "nvmf_subsystem_add_ns", 00:11:46.563 "req_id": 1 00:11:46.563 } 00:11:46.563 Got JSON-RPC error response 00:11:46.563 response: 00:11:46.563 { 00:11:46.563 "code": -32602, 00:11:46.563 "message": "Invalid parameters" 00:11:46.563 } 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0fbe38ba-19a8-4bd5-b598-2cdaef4efb93 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:46.563 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FBE38BA19A84BD5B5982CDAEF4EFB93 -i 00:11:46.823 13:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:48.730 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:48.730 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:48.730 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 791019 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 791019 ']' 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 791019 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 791019 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 791019' 00:11:48.990 killing process with pid 791019 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 791019 00:11:48.990 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 791019 00:11:49.250 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.511 rmmod nvme_tcp 00:11:49.511 rmmod nvme_fabrics 00:11:49.511 rmmod nvme_keyring 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 788525 ']' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 788525 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 788525 ']' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 788525 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 788525 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 788525' 00:11:49.511 killing process with pid 788525 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 788525 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 788525 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.511 
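The masking flow exercised above reduces to a handful of JSON-RPC calls: strip the dashes from a namespace UUID to form its NGUID, re-add the namespace as not auto-visible, then grant it to exactly one host NQN. A condensed sketch, assuming rpc.py stands for spdk/scripts/rpc.py on PATH and that uuid2nguid also uppercases (the trace above only shows its tr -d - step):

    # Sketch of the ns_masking flow traced above -- not the harness itself.
    rpc=rpc.py   # assumed shorthand for spdk/scripts/rpc.py
    uuid=0fbe38ba-19a8-4bd5-b598-2cdaef4efb93
    # uuid2nguid: drop dashes; the uppercasing is inferred from the 0FBE38BA... value above
    nguid=$(tr -d '-' <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Hosts attached under any other NQN then see an empty bdev list, which is exactly what the jq length == 0 assertion above checks after the final remove/re-add cycle.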
13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.511 13:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.109 00:11:52.109 real 0m24.631s 00:11:52.109 user 0m29.021s 00:11:52.109 sys 0m5.939s 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.109 ************************************ 00:11:52.109 END TEST nvmf_ns_masking 00:11:52.109 ************************************ 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.109 ************************************ 00:11:52.109 START TEST nvmf_nvme_cli 00:11:52.109 ************************************ 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:52.109 * Looking for test storage... 
00:11:52.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.109 13:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.109 --rc genhtml_branch_coverage=1 00:11:52.109 --rc genhtml_function_coverage=1 00:11:52.109 --rc genhtml_legend=1 00:11:52.109 --rc geninfo_all_blocks=1 00:11:52.109 --rc geninfo_unexecuted_blocks=1 00:11:52.109 00:11:52.109 ' 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.109 --rc genhtml_branch_coverage=1 00:11:52.109 --rc genhtml_function_coverage=1 00:11:52.109 --rc genhtml_legend=1 00:11:52.109 --rc geninfo_all_blocks=1 00:11:52.109 --rc geninfo_unexecuted_blocks=1 00:11:52.109 00:11:52.109 ' 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.109 --rc genhtml_branch_coverage=1 00:11:52.109 --rc genhtml_function_coverage=1 00:11:52.109 --rc genhtml_legend=1 00:11:52.109 --rc geninfo_all_blocks=1 00:11:52.109 --rc geninfo_unexecuted_blocks=1 00:11:52.109 00:11:52.109 ' 00:11:52.109 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:52.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.109 --rc genhtml_branch_coverage=1 00:11:52.109 --rc genhtml_function_coverage=1 00:11:52.109 --rc genhtml_legend=1 00:11:52.109 --rc geninfo_all_blocks=1 00:11:52.110 --rc geninfo_unexecuted_blocks=1 00:11:52.110 00:11:52.110 ' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
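The lcov gate above is a field-by-field dotted-version comparison (the cmp_versions/lt trace from scripts/common.sh). A minimal standalone sketch of the same idea, simplified (splitting on '.' only) rather than copied from scripts/common.sh:

    # Simplified sketch of the 'lt 1.15 2' check traced above; not scripts/common.sh verbatim.
    version_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: use the legacy option spelling"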
00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.110 13:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.110 13:55:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:57.442 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:57.442 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.442 
13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:57.442 Found net devices under 0000:31:00.0: cvl_0_0 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:57.442 Found net devices under 0000:31:00.1: cvl_0_1 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.442 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:11:57.443 00:11:57.443 --- 10.0.0.2 ping statistics --- 00:11:57.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.443 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:57.443 00:11:57.443 --- 10.0.0.1 ping statistics --- 00:11:57.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.443 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=796748 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 796748 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 796748 ']' 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.443 13:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.443 [2024-11-06 13:55:36.425752] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:11:57.443 [2024-11-06 13:55:36.425801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.443 [2024-11-06 13:55:36.510180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.443 [2024-11-06 13:55:36.548901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.443 [2024-11-06 13:55:36.548933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.443 [2024-11-06 13:55:36.548941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.443 [2024-11-06 13:55:36.548948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.443 [2024-11-06 13:55:36.548954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.443 [2024-11-06 13:55:36.550463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.443 [2024-11-06 13:55:36.550613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.443 [2024-11-06 13:55:36.550750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.443 [2024-11-06 13:55:36.550752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.013 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.014 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.014 [2024-11-06 13:55:37.266456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.014 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.014 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.014 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.014 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 Malloc0 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 Malloc1 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 [2024-11-06 13:55:37.354922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:11:58.274 00:11:58.274 Discovery Log Number of Records 2, Generation counter 2 00:11:58.274 =====Discovery Log Entry 0====== 00:11:58.274 trtype: tcp 00:11:58.274 adrfam: ipv4 00:11:58.274 subtype: current discovery subsystem 00:11:58.274 treq: not required 00:11:58.274 portid: 0 00:11:58.274 trsvcid: 4420 00:11:58.274 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:11:58.274 traddr: 10.0.0.2 00:11:58.274 eflags: explicit discovery connections, duplicate discovery information 00:11:58.274 sectype: none 00:11:58.274 =====Discovery Log Entry 1====== 00:11:58.274 trtype: tcp 00:11:58.274 adrfam: ipv4 00:11:58.274 subtype: nvme subsystem 00:11:58.274 treq: not required 00:11:58.274 portid: 0 00:11:58.274 trsvcid: 4420 00:11:58.274 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.274 traddr: 10.0.0.2 00:11:58.274 eflags: none 00:11:58.274 sectype: none 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:58.274 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.179 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:00.179 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:12:00.179 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.179 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:00.179 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:00.179 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:02.086 13:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:02.086 /dev/nvme0n2 ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:02.086 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.345 13:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.345 rmmod nvme_tcp 00:12:02.345 rmmod nvme_fabrics 00:12:02.345 rmmod nvme_keyring 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 796748 ']' 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 796748 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 796748 ']' 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 796748 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.345 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 796748 
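killprocess, mid-trace here, refuses to fire blindly: it first resolves the pid's comm name with ps, so a recycled pid (or a sudo wrapper) is never killed by mistake. A paraphrase of the shape visible in the trace, reconstructed as a sketch rather than taken from autotest_common.sh verbatim:

    # Paraphrased killprocess -- assumed shape reconstructed from the trace, not the real body.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
            [[ $process_name == sudo ]] && return 1   # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }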
00:12:02.604 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.604 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.604 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 796748' 00:12:02.605 killing process with pid 796748 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 796748 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 796748 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.605 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.141 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.141 00:12:05.142 real 0m12.966s 00:12:05.142 user 0m22.370s 00:12:05.142 sys 0m4.579s 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:05.142 ************************************ 00:12:05.142 END TEST nvmf_nvme_cli 00:12:05.142 ************************************ 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.142 ************************************ 00:12:05.142 START TEST nvmf_vfio_user 00:12:05.142 ************************************ 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:05.142 * Looking for test storage... 00:12:05.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:12:05.142 13:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.142 --rc genhtml_branch_coverage=1 00:12:05.142 --rc genhtml_function_coverage=1 00:12:05.142 --rc genhtml_legend=1 00:12:05.142 --rc geninfo_all_blocks=1 00:12:05.142 --rc geninfo_unexecuted_blocks=1 00:12:05.142 00:12:05.142 ' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.142 --rc genhtml_branch_coverage=1 00:12:05.142 --rc genhtml_function_coverage=1 00:12:05.142 --rc genhtml_legend=1 00:12:05.142 --rc geninfo_all_blocks=1 00:12:05.142 --rc geninfo_unexecuted_blocks=1 00:12:05.142 00:12:05.142 ' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.142 --rc genhtml_branch_coverage=1 00:12:05.142 --rc genhtml_function_coverage=1 00:12:05.142 --rc genhtml_legend=1 00:12:05.142 --rc geninfo_all_blocks=1 00:12:05.142 --rc geninfo_unexecuted_blocks=1 00:12:05.142 00:12:05.142 ' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.142 --rc genhtml_branch_coverage=1 00:12:05.142 --rc genhtml_function_coverage=1 00:12:05.142 --rc genhtml_legend=1 00:12:05.142 --rc geninfo_all_blocks=1 00:12:05.142 --rc geninfo_unexecuted_blocks=1 00:12:05.142 00:12:05.142 ' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.142 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=798573 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 798573' 00:12:05.143 Process pid: 798573 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 798573 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 798573 ']' 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:05.143 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:05.143 [2024-11-06 13:55:44.076240] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:12:05.143 [2024-11-06 13:55:44.076315] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.143 [2024-11-06 13:55:44.148601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.143 [2024-11-06 13:55:44.186897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.143 [2024-11-06 13:55:44.186934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:05.143 [2024-11-06 13:55:44.186940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.143 [2024-11-06 13:55:44.186945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.143 [2024-11-06 13:55:44.186949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.143 [2024-11-06 13:55:44.188325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.143 [2024-11-06 13:55:44.188482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.143 [2024-11-06 13:55:44.188638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.143 [2024-11-06 13:55:44.188639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.711 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:05.711 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:12:05.711 13:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:06.646 13:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:06.904 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:06.904 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:06.904 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.904 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:06.904 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:07.163 Malloc1 00:12:07.163 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:07.163 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:07.422 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:07.422 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.422 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:07.422 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:07.680 Malloc2 00:12:07.680 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
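(The trace above captures the whole vfio-user target bring-up: create the VFIOUSER transport once, then for each of the NUM_DEVICES=2 devices make a socket directory, back it with a 64 MiB / 512-byte-block malloc bdev, and expose it through its own subsystem and listener; the second device's remaining add_ns/add_listener calls continue in the lines below. A condensed sketch assembled from those exact RPC invocations — the `rpc` variable shortening the full rpc.py path is the only addition:)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER           # once per target
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i   # 64 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
         -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
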
00:12:07.939 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:07.939 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:08.199 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:08.199 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:08.199 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.199 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:08.199 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:08.200 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:08.200 [2024-11-06 13:55:47.342657] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:12:08.200 [2024-11-06 13:55:47.342686] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799270 ] 00:12:08.200 [2024-11-06 13:55:47.381595] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:08.200 [2024-11-06 13:55:47.383899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.200 [2024-11-06 13:55:47.383917] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9eb9694000 00:12:08.200 [2024-11-06 13:55:47.384918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.385905] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.386909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.387911] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.388914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.389916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.390918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.391925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.200 [2024-11-06 13:55:47.392935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.200 [2024-11-06 13:55:47.392942] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9eb9689000 00:12:08.200 [2024-11-06 13:55:47.393855] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:08.200 [2024-11-06 13:55:47.406169] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:08.200 [2024-11-06 13:55:47.406190] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:08.200 [2024-11-06 13:55:47.411031] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:08.200 [2024-11-06 13:55:47.411066] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:08.200 [2024-11-06 13:55:47.411126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:08.200 [2024-11-06 13:55:47.411137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:08.200 [2024-11-06 13:55:47.411141] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:08.200 [2024-11-06 13:55:47.412040] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:08.200 [2024-11-06 13:55:47.412047] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:08.200 [2024-11-06 13:55:47.412052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:08.200 [2024-11-06 13:55:47.413043] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:08.200 [2024-11-06 13:55:47.413052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:08.200 [2024-11-06 13:55:47.413058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:08.200 [2024-11-06 13:55:47.414052] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:08.200 [2024-11-06 13:55:47.414058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:08.200 [2024-11-06 13:55:47.415051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:12:08.200 [2024-11-06 13:55:47.415057] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:08.200 [2024-11-06 13:55:47.415060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:08.200 [2024-11-06 13:55:47.415065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:08.200 [2024-11-06 13:55:47.415171] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:08.200 [2024-11-06 13:55:47.415174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:08.200 [2024-11-06 13:55:47.415178] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:08.200 [2024-11-06 13:55:47.416060] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:08.200 [2024-11-06 13:55:47.417060] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:08.200 [2024-11-06 13:55:47.418067] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:08.200 [2024-11-06 13:55:47.419068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:08.200 [2024-11-06 13:55:47.419118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:08.200 [2024-11-06 13:55:47.420083] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:08.200 [2024-11-06 13:55:47.420089] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:08.200 [2024-11-06 13:55:47.420093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:08.200 [2024-11-06 13:55:47.420113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420123] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.200 [2024-11-06 13:55:47.420127] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.200 [2024-11-06 13:55:47.420129] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.200 [2024-11-06 13:55:47.420139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:08.200 [2024-11-06 13:55:47.420171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:08.200 [2024-11-06 13:55:47.420178] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:08.200 [2024-11-06 13:55:47.420182] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:08.200 [2024-11-06 13:55:47.420185] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:08.200 [2024-11-06 13:55:47.420188] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:08.200 [2024-11-06 13:55:47.420193] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:08.200 [2024-11-06 13:55:47.420196] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:08.200 [2024-11-06 13:55:47.420199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:08.200 [2024-11-06 13:55:47.420228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:08.200 [2024-11-06 13:55:47.420236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.200 [2024-11-06 13:55:47.420243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.200 [2024-11-06 13:55:47.420253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.200 [2024-11-06 13:55:47.420259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.200 [2024-11-06 13:55:47.420262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:08.200 [2024-11-06 13:55:47.420282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:08.200 [2024-11-06 13:55:47.420288] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:08.200 
[2024-11-06 13:55:47.420291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:08.200 [2024-11-06 13:55:47.420300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420369] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:08.201 [2024-11-06 13:55:47.420372] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:08.201 [2024-11-06 13:55:47.420374] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.201 [2024-11-06 13:55:47.420379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420397] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:08.201 [2024-11-06 13:55:47.420406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420417] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.201 [2024-11-06 13:55:47.420420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.201 [2024-11-06 13:55:47.420422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.201 [2024-11-06 13:55:47.420426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.201 [2024-11-06 13:55:47.420465] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.201 [2024-11-06 13:55:47.420467] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.201 [2024-11-06 13:55:47.420471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420516] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:08.201 [2024-11-06 13:55:47.420519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:08.201 [2024-11-06 13:55:47.420523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:08.201 [2024-11-06 13:55:47.420535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420598] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:08.201 [2024-11-06 13:55:47.420602] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:08.201 [2024-11-06 13:55:47.420604] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:08.201 [2024-11-06 13:55:47.420606] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:08.201 [2024-11-06 13:55:47.420609] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:08.201 [2024-11-06 13:55:47.420613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:08.201 [2024-11-06 13:55:47.420619] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:08.201 [2024-11-06 13:55:47.420622] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:08.201 [2024-11-06 13:55:47.420624] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.201 [2024-11-06 13:55:47.420628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420634] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:08.201 [2024-11-06 13:55:47.420636] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.201 [2024-11-06 13:55:47.420639] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.201 [2024-11-06 13:55:47.420643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420650] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:08.201 [2024-11-06 13:55:47.420653] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:08.201 [2024-11-06 13:55:47.420655] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:08.201 [2024-11-06 13:55:47.420659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:08.201 [2024-11-06 13:55:47.420664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:08.201 [2024-11-06 13:55:47.420686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:08.201 ===================================================== 00:12:08.201 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:08.201 ===================================================== 00:12:08.201 Controller Capabilities/Features 00:12:08.201 ================================ 00:12:08.201 Vendor ID: 4e58 00:12:08.201 Subsystem Vendor ID: 4e58 00:12:08.201 Serial Number: SPDK1 00:12:08.201 Model Number: SPDK bdev Controller 00:12:08.201 Firmware Version: 25.01 00:12:08.201 Recommended Arb Burst: 6 00:12:08.201 IEEE OUI Identifier: 8d 6b 50 00:12:08.201 Multi-path I/O 00:12:08.201 May have multiple subsystem ports: Yes 00:12:08.201 May have multiple controllers: Yes 00:12:08.201 Associated with SR-IOV VF: No 00:12:08.201 Max Data Transfer Size: 131072 00:12:08.201 Max Number of Namespaces: 32 00:12:08.201 Max Number of I/O Queues: 127 00:12:08.201 NVMe Specification Version (VS): 1.3 00:12:08.201 NVMe Specification Version (Identify): 1.3 00:12:08.201 Maximum Queue Entries: 256 00:12:08.201 Contiguous Queues Required: Yes 00:12:08.201 Arbitration Mechanisms Supported 00:12:08.201 Weighted Round Robin: Not Supported 00:12:08.201 Vendor Specific: Not Supported 00:12:08.201 Reset Timeout: 15000 ms 00:12:08.201 Doorbell Stride: 4 bytes 00:12:08.201 NVM Subsystem Reset: Not Supported 00:12:08.201 Command Sets Supported 00:12:08.201 NVM Command Set: Supported 00:12:08.202 Boot Partition: Not Supported 00:12:08.202 Memory Page Size Minimum: 4096 bytes 00:12:08.202 Memory Page Size Maximum: 4096 bytes 00:12:08.202 Persistent Memory Region: Not Supported 00:12:08.202 Optional Asynchronous Events Supported 00:12:08.202 Namespace Attribute Notices: Supported 00:12:08.202 Firmware Activation Notices: Not Supported 00:12:08.202 ANA Change Notices: Not Supported 00:12:08.202 PLE Aggregate Log Change Notices: Not Supported 00:12:08.202 LBA Status Info Alert Notices: Not Supported 00:12:08.202 EGE Aggregate Log Change Notices: Not Supported 00:12:08.202 Normal NVM Subsystem Shutdown event: Not Supported 00:12:08.202 Zone Descriptor Change Notices: Not Supported 00:12:08.202 Discovery Log Change Notices: Not Supported 00:12:08.202 Controller Attributes 00:12:08.202 128-bit Host Identifier: Supported 00:12:08.202 Non-Operational Permissive Mode: Not Supported 00:12:08.202 NVM Sets: Not Supported 00:12:08.202 Read Recovery Levels: Not Supported 00:12:08.202 Endurance Groups: Not Supported 00:12:08.202 Predictable Latency Mode: Not Supported 00:12:08.202 Traffic Based Keep ALive: Not Supported 00:12:08.202 Namespace Granularity: Not Supported 00:12:08.202 SQ Associations: Not Supported 00:12:08.202 UUID List: Not Supported 00:12:08.202 Multi-Domain Subsystem: Not Supported 00:12:08.202 Fixed Capacity Management: Not Supported 00:12:08.202 Variable Capacity Management: Not Supported 00:12:08.202 Delete Endurance Group: Not Supported 00:12:08.202 Delete NVM Set: Not Supported 00:12:08.202 Extended LBA Formats Supported: Not Supported 00:12:08.202 Flexible Data Placement Supported: Not Supported 00:12:08.202 00:12:08.202 Controller Memory Buffer Support 00:12:08.202 ================================ 00:12:08.202 
Supported: No 00:12:08.202 00:12:08.202 Persistent Memory Region Support 00:12:08.202 ================================ 00:12:08.202 Supported: No 00:12:08.202 00:12:08.202 Admin Command Set Attributes 00:12:08.202 ============================ 00:12:08.202 Security Send/Receive: Not Supported 00:12:08.202 Format NVM: Not Supported 00:12:08.202 Firmware Activate/Download: Not Supported 00:12:08.202 Namespace Management: Not Supported 00:12:08.202 Device Self-Test: Not Supported 00:12:08.202 Directives: Not Supported 00:12:08.202 NVMe-MI: Not Supported 00:12:08.202 Virtualization Management: Not Supported 00:12:08.202 Doorbell Buffer Config: Not Supported 00:12:08.202 Get LBA Status Capability: Not Supported 00:12:08.202 Command & Feature Lockdown Capability: Not Supported 00:12:08.202 Abort Command Limit: 4 00:12:08.202 Async Event Request Limit: 4 00:12:08.202 Number of Firmware Slots: N/A 00:12:08.202 Firmware Slot 1 Read-Only: N/A 00:12:08.202 Firmware Activation Without Reset: N/A 00:12:08.202 Multiple Update Detection Support: N/A 00:12:08.202 Firmware Update Granularity: No Information Provided 00:12:08.202 Per-Namespace SMART Log: No 00:12:08.202 Asymmetric Namespace Access Log Page: Not Supported 00:12:08.202 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:08.202 Command Effects Log Page: Supported 00:12:08.202 Get Log Page Extended Data: Supported 00:12:08.202 Telemetry Log Pages: Not Supported 00:12:08.202 Persistent Event Log Pages: Not Supported 00:12:08.202 Supported Log Pages Log Page: May Support 00:12:08.202 Commands Supported & Effects Log Page: Not Supported 00:12:08.202 Feature Identifiers & Effects Log Page:May Support 00:12:08.202 NVMe-MI Commands & Effects Log Page: May Support 00:12:08.202 Data Area 4 for Telemetry Log: Not Supported 00:12:08.202 Error Log Page Entries Supported: 128 00:12:08.202 Keep Alive: Supported 00:12:08.202 Keep Alive Granularity: 10000 ms 00:12:08.202 00:12:08.202 NVM Command Set Attributes 00:12:08.202 ========================== 00:12:08.202 Submission Queue Entry Size 00:12:08.202 Max: 64 00:12:08.202 Min: 64 00:12:08.202 Completion Queue Entry Size 00:12:08.202 Max: 16 00:12:08.202 Min: 16 00:12:08.202 Number of Namespaces: 32 00:12:08.202 Compare Command: Supported 00:12:08.202 Write Uncorrectable Command: Not Supported 00:12:08.202 Dataset Management Command: Supported 00:12:08.202 Write Zeroes Command: Supported 00:12:08.202 Set Features Save Field: Not Supported 00:12:08.202 Reservations: Not Supported 00:12:08.202 Timestamp: Not Supported 00:12:08.202 Copy: Supported 00:12:08.202 Volatile Write Cache: Present 00:12:08.202 Atomic Write Unit (Normal): 1 00:12:08.202 Atomic Write Unit (PFail): 1 00:12:08.202 Atomic Compare & Write Unit: 1 00:12:08.202 Fused Compare & Write: Supported 00:12:08.202 Scatter-Gather List 00:12:08.202 SGL Command Set: Supported (Dword aligned) 00:12:08.202 SGL Keyed: Not Supported 00:12:08.202 SGL Bit Bucket Descriptor: Not Supported 00:12:08.202 SGL Metadata Pointer: Not Supported 00:12:08.202 Oversized SGL: Not Supported 00:12:08.202 SGL Metadata Address: Not Supported 00:12:08.202 SGL Offset: Not Supported 00:12:08.202 Transport SGL Data Block: Not Supported 00:12:08.202 Replay Protected Memory Block: Not Supported 00:12:08.202 00:12:08.202 Firmware Slot Information 00:12:08.202 ========================= 00:12:08.202 Active slot: 1 00:12:08.202 Slot 1 Firmware Revision: 25.01 00:12:08.202 00:12:08.202 00:12:08.202 Commands Supported and Effects 00:12:08.202 ============================== 00:12:08.202 Admin
Commands 00:12:08.202 -------------- 00:12:08.202 Get Log Page (02h): Supported 00:12:08.202 Identify (06h): Supported 00:12:08.202 Abort (08h): Supported 00:12:08.202 Set Features (09h): Supported 00:12:08.202 Get Features (0Ah): Supported 00:12:08.202 Asynchronous Event Request (0Ch): Supported 00:12:08.202 Keep Alive (18h): Supported 00:12:08.202 I/O Commands 00:12:08.202 ------------ 00:12:08.202 Flush (00h): Supported LBA-Change 00:12:08.202 Write (01h): Supported LBA-Change 00:12:08.202 Read (02h): Supported 00:12:08.202 Compare (05h): Supported 00:12:08.202 Write Zeroes (08h): Supported LBA-Change 00:12:08.202 Dataset Management (09h): Supported LBA-Change 00:12:08.202 Copy (19h): Supported LBA-Change 00:12:08.202 00:12:08.202 Error Log 00:12:08.202 ========= 00:12:08.202 00:12:08.202 Arbitration 00:12:08.202 =========== 00:12:08.202 Arbitration Burst: 1 00:12:08.202 00:12:08.202 Power Management 00:12:08.202 ================ 00:12:08.202 Number of Power States: 1 00:12:08.202 Current Power State: Power State #0 00:12:08.202 Power State #0: 00:12:08.202 Max Power: 0.00 W 00:12:08.202 Non-Operational State: Operational 00:12:08.202 Entry Latency: Not Reported 00:12:08.202 Exit Latency: Not Reported 00:12:08.202 Relative Read Throughput: 0 00:12:08.202 Relative Read Latency: 0 00:12:08.202 Relative Write Throughput: 0 00:12:08.202 Relative Write Latency: 0 00:12:08.202 Idle Power: Not Reported 00:12:08.202 Active Power: Not Reported 00:12:08.202 Non-Operational Permissive Mode: Not Supported 00:12:08.202 00:12:08.202 Health Information 00:12:08.202 ================== 00:12:08.202 Critical Warnings: 00:12:08.202 Available Spare Space: OK 00:12:08.202 Temperature: OK 00:12:08.202 Device Reliability: OK 00:12:08.202 Read Only: No 00:12:08.202 Volatile Memory Backup: OK 00:12:08.202 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:08.202 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:08.202 Available Spare: 0% 00:12:08.202 [2024-11-06 13:55:47.420764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:08.202 [2024-11-06 13:55:47.420771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:08.202 [2024-11-06 13:55:47.420790] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:08.202 [2024-11-06 13:55:47.420797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.202 [2024-11-06 13:55:47.420802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.202 [2024-11-06 13:55:47.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.202 [2024-11-06 13:55:47.420811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.202 [2024-11-06 13:55:47.424250] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:08.202 [2024-11-06 13:55:47.424258] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:08.202 [2024-11-06 13:55:47.425118]
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:08.202 [2024-11-06 13:55:47.425156] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:08.202 [2024-11-06 13:55:47.425161] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:08.203 [2024-11-06 13:55:47.426121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:08.203 [2024-11-06 13:55:47.426129] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:08.203 [2024-11-06 13:55:47.426180] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:08.203 [2024-11-06 13:55:47.427145] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:08.203 Available Spare Threshold: 0% 00:12:08.203 Life Percentage Used: 0% 00:12:08.203 Data Units Read: 0 00:12:08.203 Data Units Written: 0 00:12:08.203 Host Read Commands: 0 00:12:08.203 Host Write Commands: 0 00:12:08.203 Controller Busy Time: 0 minutes 00:12:08.203 Power Cycles: 0 00:12:08.203 Power On Hours: 0 hours 00:12:08.203 Unsafe Shutdowns: 0 00:12:08.203 Unrecoverable Media Errors: 0 00:12:08.203 Lifetime Error Log Entries: 0 00:12:08.203 Warning Temperature Time: 0 minutes 00:12:08.203 Critical Temperature Time: 0 minutes 00:12:08.203 00:12:08.203 Number of Queues 00:12:08.203 ================ 00:12:08.203 Number of I/O Submission Queues: 127 00:12:08.203 Number of I/O Completion Queues: 127 00:12:08.203 00:12:08.203 Active Namespaces 00:12:08.203 ================= 00:12:08.203 Namespace ID:1 00:12:08.203 Error Recovery Timeout: Unlimited 00:12:08.203 Command Set Identifier: NVM (00h) 00:12:08.203 Deallocate: Supported 00:12:08.203 Deallocated/Unwritten Error: Not Supported 00:12:08.203 Deallocated Read Value: Unknown 00:12:08.203 Deallocate in Write Zeroes: Not Supported 00:12:08.203 Deallocated Guard Field: 0xFFFF 00:12:08.203 Flush: Supported 00:12:08.203 Reservation: Supported 00:12:08.203 Namespace Sharing Capabilities: Multiple Controllers 00:12:08.203 Size (in LBAs): 131072 (0GiB) 00:12:08.203 Capacity (in LBAs): 131072 (0GiB) 00:12:08.203 Utilization (in LBAs): 131072 (0GiB) 00:12:08.203 NGUID: 426DF96D4A7840148E8C1EF238BE06F2 00:12:08.203 UUID: 426df96d-4a78-4014-8e8c-1ef238be06f2 00:12:08.203 Thin Provisioning: Not Supported 00:12:08.203 Per-NS Atomic Units: Yes 00:12:08.203 Atomic Boundary Size (Normal): 0 00:12:08.203 Atomic Boundary Size (PFail): 0 00:12:08.203 Atomic Boundary Offset: 0 00:12:08.203 Maximum Single Source Range Length: 65535 00:12:08.203 Maximum Copy Length: 65535 00:12:08.203 Maximum Source Range Count: 1 00:12:08.203 NGUID/EUI64 Never Reused: No 00:12:08.203 Namespace Write Protected: No 00:12:08.203 Number of LBA Formats: 1 00:12:08.203 Current LBA Format: LBA Format #00 00:12:08.203 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:08.203 00:12:08.203 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
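The perf invocations in this run all share one shape; a minimal sketch for sweeping the workloads exercised here against the same vfio-user endpoint, reusing the binary path and transport ID from the command above (-q is queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -c core mask; -s and -g are carried over unchanged from the run above):

#!/usr/bin/env bash
# Minimal sketch: run the read and write spdk_nvme_perf passes back to back
# against one vfio-user controller, with the exact flags used in this job.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
for wl in read write; do   # the two -w values used by this test
  "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
done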
00:12:08.462 [2024-11-06 13:55:47.594923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:13.736 Initializing NVMe Controllers 00:12:13.736 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:13.736 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:13.736 Initialization complete. Launching workers. 00:12:13.736 ======================================================== 00:12:13.736 Latency(us) 00:12:13.736 Device Information : IOPS MiB/s Average min max 00:12:13.736 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39999.71 156.25 3199.70 839.23 9797.45 00:12:13.736 ======================================================== 00:12:13.736 Total : 39999.71 156.25 3199.70 839.23 9797.45 00:12:13.736 00:12:13.736 [2024-11-06 13:55:52.615609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:13.736 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:13.736 [2024-11-06 13:55:52.795414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.012 Initializing NVMe Controllers 00:12:19.012 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.012 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:19.012 Initialization complete. Launching workers. 
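Each perf pass above and below ends in the same Latency(us) table, with IOPS in column 3 and average latency in column 5 of the Total row. A small sketch for pulling those two numbers out of a saved run; "perf-read.log" is a hypothetical capture of the raw tool output (without the Jenkins timestamp prefixes shown in this log):

# Sketch: extract IOPS and average latency from the "Total :" row,
# assuming the column layout of the table printed above.
awk '$1 == "Total" { printf "IOPS=%s avg_us=%s\n", $3, $5 }' perf-read.log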
00:12:19.012 ======================================================== 00:12:19.012 Latency(us) 00:12:19.012 Device Information : IOPS MiB/s Average min max 00:12:19.012 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16062.30 62.74 7974.51 5985.96 8978.50 00:12:19.012 ======================================================== 00:12:19.012 Total : 16062.30 62.74 7974.51 5985.96 8978.50 00:12:19.012 00:12:19.012 [2024-11-06 13:55:57.835570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.012 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:19.012 [2024-11-06 13:55:58.027422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.283 [2024-11-06 13:56:03.087411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.283 Initializing NVMe Controllers 00:12:24.283 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.283 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.283 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:24.283 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:24.283 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:24.283 Initialization complete. Launching workers. 00:12:24.283 Starting thread on core 2 00:12:24.283 Starting thread on core 3 00:12:24.283 Starting thread on core 1 00:12:24.283 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:24.283 [2024-11-06 13:56:03.328317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.572 [2024-11-06 13:56:06.378307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.572 Initializing NVMe Controllers 00:12:27.572 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.572 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.572 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:27.572 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:27.572 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:27.572 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:27.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:27.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:27.572 Initialization complete. Launching workers. 
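The arbitration example whose output follows reports one IO rate per core while all four cores run urgent-priority queues. A sketch for reducing those per-core lines from a saved run; "arb.log" is a hypothetical capture, with the line layout as printed below:

# Sketch: print "core N: <rate> IO/s" for each core from a captured
# arbitration run; everything else in the output is ignored.
awk 'match($0, /core [0-9]+: +[0-9.]+ IO\/s/) { print substr($0, RSTART, RLENGTH) }' arb.log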
00:12:27.572 Starting thread on core 1 with urgent priority queue 00:12:27.572 Starting thread on core 2 with urgent priority queue 00:12:27.572 Starting thread on core 3 with urgent priority queue 00:12:27.572 Starting thread on core 0 with urgent priority queue 00:12:27.573 SPDK bdev Controller (SPDK1 ) core 0: 7685.33 IO/s 13.01 secs/100000 ios 00:12:27.573 SPDK bdev Controller (SPDK1 ) core 1: 8999.67 IO/s 11.11 secs/100000 ios 00:12:27.573 SPDK bdev Controller (SPDK1 ) core 2: 10749.67 IO/s 9.30 secs/100000 ios 00:12:27.573 SPDK bdev Controller (SPDK1 ) core 3: 8338.33 IO/s 11.99 secs/100000 ios 00:12:27.573 ======================================================== 00:12:27.573 00:12:27.573 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.573 [2024-11-06 13:56:06.610651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.573 Initializing NVMe Controllers 00:12:27.573 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.573 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.573 Namespace ID: 1 size: 0GB 00:12:27.573 Initialization complete. 00:12:27.573 INFO: using host memory buffer for IO 00:12:27.573 Hello world! 00:12:27.573 [2024-11-06 13:56:06.643829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.573 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.831 [2024-11-06 13:56:06.872097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.769 Initializing NVMe Controllers 00:12:28.769 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.769 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.769 Initialization complete. Launching workers. 
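The overhead tool whose output follows prints per-operation submit and complete statistics in nanoseconds, then a histogram for each. A sketch to keep just the two summary lines from a saved run; "overhead.log" is a hypothetical capture of the raw output:

# Sketch: keep only the submit/complete summary lines from a captured
# overhead run; the histogram bodies are dropped.
grep -E '(submit|complete) \(in ns\) avg, min, max' overhead.log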
00:12:28.769 submit (in ns) avg, min, max = 5529.9, 2835.8, 4073658.3 00:12:28.769 complete (in ns) avg, min, max = 16638.3, 1635.0, 4014122.5 00:12:28.769 00:12:28.769 Submit histogram 00:12:28.769 ================ 00:12:28.769 Range in us Cumulative Count 00:12:28.769 2.827 - 2.840: 0.0244% ( 5) 00:12:28.769 2.840 - 2.853: 0.2928% ( 55) 00:12:28.769 2.853 - 2.867: 1.2885% ( 204) 00:12:28.769 2.867 - 2.880: 3.8460% ( 524) 00:12:28.769 2.880 - 2.893: 7.5602% ( 761) 00:12:28.769 2.893 - 2.907: 12.4262% ( 997) 00:12:28.769 2.907 - 2.920: 18.3367% ( 1211) 00:12:28.769 2.920 - 2.933: 23.9592% ( 1152) 00:12:28.769 2.933 - 2.947: 30.4895% ( 1338) 00:12:28.769 2.947 - 2.960: 36.7221% ( 1277) 00:12:28.769 2.960 - 2.973: 43.0133% ( 1289) 00:12:28.769 2.973 - 2.987: 49.8560% ( 1402) 00:12:28.769 2.987 - 3.000: 57.1380% ( 1492) 00:12:28.769 3.000 - 3.013: 65.7133% ( 1757) 00:12:28.769 3.013 - 3.027: 74.5864% ( 1818) 00:12:28.769 3.027 - 3.040: 82.1368% ( 1547) 00:12:28.769 3.040 - 3.053: 88.7647% ( 1358) 00:12:28.769 3.053 - 3.067: 93.1280% ( 894) 00:12:28.769 3.067 - 3.080: 95.7587% ( 539) 00:12:28.769 3.080 - 3.093: 97.4230% ( 341) 00:12:28.769 3.093 - 3.107: 98.5114% ( 223) 00:12:28.769 3.107 - 3.120: 99.1264% ( 126) 00:12:28.769 3.120 - 3.133: 99.3655% ( 49) 00:12:28.769 3.133 - 3.147: 99.5168% ( 31) 00:12:28.769 3.147 - 3.160: 99.5559% ( 8) 00:12:28.769 3.160 - 3.173: 99.5803% ( 5) 00:12:28.769 3.200 - 3.213: 99.5851% ( 1) 00:12:28.769 3.267 - 3.280: 99.5900% ( 1) 00:12:28.769 3.627 - 3.653: 99.5949% ( 1) 00:12:28.769 3.733 - 3.760: 99.5998% ( 1) 00:12:28.769 4.133 - 4.160: 99.6047% ( 1) 00:12:28.769 4.187 - 4.213: 99.6144% ( 2) 00:12:28.769 4.293 - 4.320: 99.6193% ( 1) 00:12:28.769 4.373 - 4.400: 99.6242% ( 1) 00:12:28.769 4.560 - 4.587: 99.6339% ( 2) 00:12:28.769 4.613 - 4.640: 99.6388% ( 1) 00:12:28.769 4.640 - 4.667: 99.6437% ( 1) 00:12:28.769 4.693 - 4.720: 99.6535% ( 2) 00:12:28.769 4.720 - 4.747: 99.6584% ( 1) 00:12:28.769 4.747 - 4.773: 99.6681% ( 2) 00:12:28.769 4.773 - 4.800: 99.6730% ( 1) 00:12:28.769 4.800 - 4.827: 99.6876% ( 3) 00:12:28.769 4.907 - 4.933: 99.6925% ( 1) 00:12:28.769 4.933 - 4.960: 99.6974% ( 1) 00:12:28.769 4.960 - 4.987: 99.7023% ( 1) 00:12:28.769 5.013 - 5.040: 99.7072% ( 1) 00:12:28.769 5.040 - 5.067: 99.7120% ( 1) 00:12:28.769 5.067 - 5.093: 99.7169% ( 1) 00:12:28.769 5.093 - 5.120: 99.7267% ( 2) 00:12:28.769 5.120 - 5.147: 99.7316% ( 1) 00:12:28.769 5.173 - 5.200: 99.7364% ( 1) 00:12:28.769 5.253 - 5.280: 99.7413% ( 1) 00:12:28.769 5.360 - 5.387: 99.7462% ( 1) 00:12:28.769 5.493 - 5.520: 99.7511% ( 1) 00:12:28.769 5.547 - 5.573: 99.7560% ( 1) 00:12:28.769 5.600 - 5.627: 99.7657% ( 2) 00:12:28.769 5.760 - 5.787: 99.7706% ( 1) 00:12:28.769 5.920 - 5.947: 99.7755% ( 1) 00:12:28.769 5.973 - 6.000: 99.7804% ( 1) 00:12:28.769 6.000 - 6.027: 99.7853% ( 1) 00:12:28.769 6.027 - 6.053: 99.7950% ( 2) 00:12:28.769 6.053 - 6.080: 99.7999% ( 1) 00:12:28.769 6.133 - 6.160: 99.8048% ( 1) 00:12:28.769 6.187 - 6.213: 99.8097% ( 1) 00:12:28.769 6.213 - 6.240: 99.8145% ( 1) 00:12:28.769 6.240 - 6.267: 99.8194% ( 1) 00:12:28.769 6.320 - 6.347: 99.8292% ( 2) 00:12:28.769 6.453 - 6.480: 99.8341% ( 1) 00:12:28.769 6.533 - 6.560: 99.8389% ( 1) 00:12:28.769 6.613 - 6.640: 99.8438% ( 1) 00:12:28.769 6.747 - 6.773: 99.8536% ( 2) 00:12:28.769 6.800 - 6.827: 99.8585% ( 1) 00:12:28.769 6.827 - 6.880: 99.8682% ( 2) 00:12:28.769 6.880 - 6.933: 99.8780% ( 2) 00:12:28.769 6.987 - 7.040: 99.8877% ( 2) 00:12:28.769 7.040 - 7.093: 99.8975% ( 2) 00:12:28.769 7.147 - 7.200: 99.9073% ( 2) 00:12:28.769 

7.360 - 7.413: 99.9121% ( 1) 00:12:28.769 7.413 - 7.467: 99.9170% ( 1) 00:12:28.769 [2024-11-06 13:56:07.890737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.769 7.840 - 7.893: 99.9219% ( 1) 00:12:28.769 8.587 - 8.640: 99.9268% ( 1) 00:12:28.769 8.640 - 8.693: 99.9317% ( 1) 00:12:28.769 17.067 - 17.173: 99.9366% ( 1) 00:12:28.769 3986.773 - 4014.080: 99.9951% ( 12) 00:12:28.769 4068.693 - 4096.000: 100.0000% ( 1) 00:12:28.769 00:12:28.769 Complete histogram 00:12:28.769 ================== 00:12:28.769 Range in us Cumulative Count 00:12:28.769 1.633 - 1.640: 0.4588% ( 94) 00:12:28.769 1.640 - 1.647: 0.8492% ( 80) 00:12:28.769 1.647 - 1.653: 0.9176% ( 14) 00:12:28.769 1.653 - 1.660: 1.0689% ( 31) 00:12:28.769 1.660 - 1.667: 1.1616% ( 19) 00:12:28.769 1.667 - 1.673: 1.1909% ( 6) 00:12:28.769 1.673 - 1.680: 1.2202% ( 6) 00:12:28.769 1.680 - 1.687: 1.3959% ( 36) 00:12:28.769 1.687 - 1.693: 44.5800% ( 8848) 00:12:28.769 1.693 - 1.700: 53.2383% ( 1774) 00:12:28.769 1.700 - 1.707: 57.8310% ( 941) 00:12:28.769 1.707 - 1.720: 76.2800% ( 3780) 00:12:28.769 1.720 - 1.733: 82.5565% ( 1286) 00:12:28.769 1.733 - 1.747: 83.4545% ( 184) 00:12:28.769 1.747 - 1.760: 88.2718% ( 987) 00:12:28.769 1.760 - 1.773: 93.9138% ( 1156) 00:12:28.769 1.773 - 1.787: 97.3156% ( 697) 00:12:28.769 1.787 - 1.800: 98.9702% ( 339) 00:12:28.769 1.800 - 1.813: 99.3997% ( 88) 00:12:28.769 1.813 - 1.827: 99.4826% ( 17) 00:12:28.769 1.867 - 1.880: 99.4924% ( 2) 00:12:28.769 2.013 - 2.027: 99.4973% ( 1) 00:12:28.769 3.680 - 3.707: 99.5022% ( 1) 00:12:28.770 3.787 - 3.813: 99.5071% ( 1) 00:12:28.770 3.867 - 3.893: 99.5119% ( 1) 00:12:28.770 4.027 - 4.053: 99.5168% ( 1) 00:12:28.770 4.613 - 4.640: 99.5217% ( 1) 00:12:28.770 4.800 - 4.827: 99.5266% ( 1) 00:12:28.770 4.827 - 4.853: 99.5315% ( 1) 00:12:28.770 5.093 - 5.120: 99.5363% ( 1) 00:12:28.770 5.147 - 5.173: 99.5412% ( 1) 00:12:28.770 5.173 - 5.200: 99.5510% ( 2) 00:12:28.770 5.227 - 5.253: 99.5559% ( 1) 00:12:28.770 5.253 - 5.280: 99.5656% ( 2) 00:12:28.770 5.413 - 5.440: 99.5754% ( 2) 00:12:28.770 5.600 - 5.627: 99.5803% ( 1) 00:12:28.770 5.627 - 5.653: 99.5851% ( 1) 00:12:28.770 5.893 - 5.920: 99.5900% ( 1) 00:12:28.770 6.000 - 6.027: 99.5949% ( 1) 00:12:28.770 6.133 - 6.160: 99.5998% ( 1) 00:12:28.770 6.293 - 6.320: 99.6047% ( 1) 00:12:28.770 6.747 - 6.773: 99.6095% ( 1) 00:12:28.770 8.373 - 8.427: 99.6144% ( 1) 00:12:28.770 12.000 - 12.053: 99.6193% ( 1) 00:12:28.770 16.853 - 16.960: 99.6242% ( 1) 00:12:28.770 2116.267 - 2129.920: 99.6291% ( 1) 00:12:28.770 3986.773 - 4014.080: 99.9951% ( 75) 00:12:28.770 4014.080 - 4041.387: 100.0000% ( 1) 00:12:28.770 00:12:28.770 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:28.770 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:28.770 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:28.770 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:28.770 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.029 [ 00:12:29.029 { 00:12:29.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 
00:12:29.029 "subtype": "Discovery", 00:12:29.029 "listen_addresses": [], 00:12:29.029 "allow_any_host": true, 00:12:29.029 "hosts": [] 00:12:29.029 }, 00:12:29.029 { 00:12:29.029 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.029 "subtype": "NVMe", 00:12:29.029 "listen_addresses": [ 00:12:29.029 { 00:12:29.029 "trtype": "VFIOUSER", 00:12:29.029 "adrfam": "IPv4", 00:12:29.029 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.029 "trsvcid": "0" 00:12:29.029 } 00:12:29.029 ], 00:12:29.029 "allow_any_host": true, 00:12:29.029 "hosts": [], 00:12:29.029 "serial_number": "SPDK1", 00:12:29.029 "model_number": "SPDK bdev Controller", 00:12:29.029 "max_namespaces": 32, 00:12:29.029 "min_cntlid": 1, 00:12:29.029 "max_cntlid": 65519, 00:12:29.029 "namespaces": [ 00:12:29.029 { 00:12:29.029 "nsid": 1, 00:12:29.029 "bdev_name": "Malloc1", 00:12:29.029 "name": "Malloc1", 00:12:29.029 "nguid": "426DF96D4A7840148E8C1EF238BE06F2", 00:12:29.029 "uuid": "426df96d-4a78-4014-8e8c-1ef238be06f2" 00:12:29.029 } 00:12:29.029 ] 00:12:29.029 }, 00:12:29.029 { 00:12:29.029 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.029 "subtype": "NVMe", 00:12:29.029 "listen_addresses": [ 00:12:29.029 { 00:12:29.029 "trtype": "VFIOUSER", 00:12:29.029 "adrfam": "IPv4", 00:12:29.029 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.029 "trsvcid": "0" 00:12:29.029 } 00:12:29.029 ], 00:12:29.029 "allow_any_host": true, 00:12:29.029 "hosts": [], 00:12:29.029 "serial_number": "SPDK2", 00:12:29.029 "model_number": "SPDK bdev Controller", 00:12:29.029 "max_namespaces": 32, 00:12:29.029 "min_cntlid": 1, 00:12:29.029 "max_cntlid": 65519, 00:12:29.029 "namespaces": [ 00:12:29.029 { 00:12:29.029 "nsid": 1, 00:12:29.029 "bdev_name": "Malloc2", 00:12:29.029 "name": "Malloc2", 00:12:29.029 "nguid": "E04F3FF381CB4F14AB392B70F35313D3", 00:12:29.029 "uuid": "e04f3ff3-81cb-4f14-ab39-2b70f35313d3" 00:12:29.029 } 00:12:29.029 ] 00:12:29.029 } 00:12:29.029 ] 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=803934 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=1 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=2 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:12:29.029 [2024-11-06 13:56:08.249651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:29.029 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:29.288 Malloc3 00:12:29.288 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:29.547 [2024-11-06 13:56:08.631259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.547 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.547 Asynchronous Event Request test 00:12:29.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.547 Registering asynchronous event callbacks... 00:12:29.547 Starting namespace attribute notice tests for all controllers... 00:12:29.547 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.547 aer_cb - Changed Namespace 00:12:29.547 Cleaning up... 
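The hot-add path exercised above is plain rpc.py calls: create a malloc bdev, attach it to cnode1 as namespace 2, then list the subsystems (the JSON that follows) to confirm it. A minimal sketch of the same sequence against a running target; the jq filter is an assumption, since jq is not used anywhere in this job:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2019-07.io.spdk:cnode1
# Create a 64 MB malloc bdev with 512-byte blocks and attach it as nsid 2,
# exactly as the AER test above does.
"$RPC" bdev_malloc_create 64 512 --name Malloc3
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc3 -n 2
# Confirm the new namespace shows up under the subsystem (jq is assumed).
"$RPC" nvmf_get_subsystems | jq -r --arg nqn "$NQN" \
  '.[] | select(.nqn == $nqn) | .namespaces[] | "\(.nsid) \(.name)"'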
00:12:29.547 [ 00:12:29.547 { 00:12:29.547 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.547 "subtype": "Discovery", 00:12:29.547 "listen_addresses": [], 00:12:29.547 "allow_any_host": true, 00:12:29.547 "hosts": [] 00:12:29.547 }, 00:12:29.547 { 00:12:29.547 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.547 "subtype": "NVMe", 00:12:29.547 "listen_addresses": [ 00:12:29.547 { 00:12:29.547 "trtype": "VFIOUSER", 00:12:29.547 "adrfam": "IPv4", 00:12:29.547 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.547 "trsvcid": "0" 00:12:29.547 } 00:12:29.547 ], 00:12:29.547 "allow_any_host": true, 00:12:29.547 "hosts": [], 00:12:29.547 "serial_number": "SPDK1", 00:12:29.547 "model_number": "SPDK bdev Controller", 00:12:29.547 "max_namespaces": 32, 00:12:29.547 "min_cntlid": 1, 00:12:29.547 "max_cntlid": 65519, 00:12:29.547 "namespaces": [ 00:12:29.547 { 00:12:29.547 "nsid": 1, 00:12:29.547 "bdev_name": "Malloc1", 00:12:29.547 "name": "Malloc1", 00:12:29.547 "nguid": "426DF96D4A7840148E8C1EF238BE06F2", 00:12:29.547 "uuid": "426df96d-4a78-4014-8e8c-1ef238be06f2" 00:12:29.547 }, 00:12:29.547 { 00:12:29.547 "nsid": 2, 00:12:29.547 "bdev_name": "Malloc3", 00:12:29.547 "name": "Malloc3", 00:12:29.547 "nguid": "CD713AAC20C847DF822D320E0314A8A1", 00:12:29.547 "uuid": "cd713aac-20c8-47df-822d-320e0314a8a1" 00:12:29.547 } 00:12:29.547 ] 00:12:29.547 }, 00:12:29.547 { 00:12:29.547 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.547 "subtype": "NVMe", 00:12:29.547 "listen_addresses": [ 00:12:29.547 { 00:12:29.547 "trtype": "VFIOUSER", 00:12:29.547 "adrfam": "IPv4", 00:12:29.547 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.547 "trsvcid": "0" 00:12:29.547 } 00:12:29.547 ], 00:12:29.547 "allow_any_host": true, 00:12:29.547 "hosts": [], 00:12:29.547 "serial_number": "SPDK2", 00:12:29.547 "model_number": "SPDK bdev Controller", 00:12:29.547 "max_namespaces": 32, 00:12:29.547 "min_cntlid": 1, 00:12:29.547 "max_cntlid": 65519, 00:12:29.547 "namespaces": [ 00:12:29.547 { 00:12:29.547 "nsid": 1, 00:12:29.547 "bdev_name": "Malloc2", 00:12:29.547 "name": "Malloc2", 00:12:29.547 "nguid": "E04F3FF381CB4F14AB392B70F35313D3", 00:12:29.547 "uuid": "e04f3ff3-81cb-4f14-ab39-2b70f35313d3" 00:12:29.547 } 00:12:29.547 ] 00:12:29.547 } 00:12:29.547 ] 00:12:29.547 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 803934 00:12:29.547 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.547 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:29.547 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:29.547 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:29.547 [2024-11-06 13:56:08.816334] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
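This identify pass runs with -L nvme -L nvme_vfio -L vfio_pci, which is why DEBUG traces from the attach and controller-init sequence are interleaved with the report below; dropping the -L flags yields the report alone. A sketch using the same binary and transport ID as the command above:

IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
# Same controller, no debug log flags: only the identify report is printed.
"$IDENTIFY" -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'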
00:12:29.547 [2024-11-06 13:56:08.816364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803955 ] 00:12:29.808 [2024-11-06 13:56:08.853474] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:29.808 [2024-11-06 13:56:08.862434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.808 [2024-11-06 13:56:08.862454] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f07dc5ff000 00:12:29.808 [2024-11-06 13:56:08.863436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.864440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.865448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.866455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.867461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.868467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.869469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.870472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.808 [2024-11-06 13:56:08.871481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.808 [2024-11-06 13:56:08.871488] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f07dc5f4000 00:12:29.808 [2024-11-06 13:56:08.872398] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.809 [2024-11-06 13:56:08.881774] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:29.809 [2024-11-06 13:56:08.881793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:29.809 [2024-11-06 13:56:08.886859] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.809 [2024-11-06 13:56:08.886891] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:29.809 [2024-11-06 13:56:08.886949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:29.809 
[2024-11-06 13:56:08.886959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:29.809 [2024-11-06 13:56:08.886963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:29.809 [2024-11-06 13:56:08.887859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:29.809 [2024-11-06 13:56:08.887866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:29.809 [2024-11-06 13:56:08.887872] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:29.809 [2024-11-06 13:56:08.888862] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.809 [2024-11-06 13:56:08.888869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:29.809 [2024-11-06 13:56:08.888875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:29.809 [2024-11-06 13:56:08.889867] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:29.809 [2024-11-06 13:56:08.889874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:29.809 [2024-11-06 13:56:08.890874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:29.809 [2024-11-06 13:56:08.890881] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:29.809 [2024-11-06 13:56:08.890885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:29.809 [2024-11-06 13:56:08.890889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:29.809 [2024-11-06 13:56:08.890995] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:29.809 [2024-11-06 13:56:08.890999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:29.809 [2024-11-06 13:56:08.891002] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:29.809 [2024-11-06 13:56:08.891880] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:29.809 [2024-11-06 13:56:08.892881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:29.809 [2024-11-06 13:56:08.893885] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.809 [2024-11-06 13:56:08.894888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.809 [2024-11-06 13:56:08.894918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:29.809 [2024-11-06 13:56:08.895895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:29.809 [2024-11-06 13:56:08.895903] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:29.809 [2024-11-06 13:56:08.895907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.895921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:29.809 [2024-11-06 13:56:08.895927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.895936] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.809 [2024-11-06 13:56:08.895940] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.809 [2024-11-06 13:56:08.895942] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.809 [2024-11-06 13:56:08.895951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.809 [2024-11-06 13:56:08.903251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:29.809 [2024-11-06 13:56:08.903260] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:29.809 [2024-11-06 13:56:08.903264] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:29.809 [2024-11-06 13:56:08.903267] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:29.809 [2024-11-06 13:56:08.903273] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:29.809 [2024-11-06 13:56:08.903278] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:29.809 [2024-11-06 13:56:08.903281] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:29.809 [2024-11-06 13:56:08.903285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.903291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:29.809 [2024-11-06 
13:56:08.903299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:29.809 [2024-11-06 13:56:08.911251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:29.809 [2024-11-06 13:56:08.911262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.809 [2024-11-06 13:56:08.911268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.809 [2024-11-06 13:56:08.911274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.809 [2024-11-06 13:56:08.911279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.809 [2024-11-06 13:56:08.911283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.911288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.911294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:29.809 [2024-11-06 13:56:08.919250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:29.809 [2024-11-06 13:56:08.919258] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:29.809 [2024-11-06 13:56:08.919262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.919267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.919271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.919277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.809 [2024-11-06 13:56:08.927250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:29.809 [2024-11-06 13:56:08.927297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.927303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.927308] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:29.809 [2024-11-06 13:56:08.927313] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:12:29.809 [2024-11-06 13:56:08.927316] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.809 [2024-11-06 13:56:08.927320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:29.809 [2024-11-06 13:56:08.935249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:29.809 [2024-11-06 13:56:08.935258] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:29.809 [2024-11-06 13:56:08.935266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:29.809 [2024-11-06 13:56:08.935272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.935277] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.810 [2024-11-06 13:56:08.935280] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.810 [2024-11-06 13:56:08.935282] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.810 [2024-11-06 13:56:08.935287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.943250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.943262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.943268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.943273] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.810 [2024-11-06 13:56:08.943276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.810 [2024-11-06 13:56:08.943280] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.810 [2024-11-06 13:56:08.943285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.951253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.951260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951288] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:29.810 [2024-11-06 13:56:08.951293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:29.810 [2024-11-06 13:56:08.951297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:29.810 [2024-11-06 13:56:08.951309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.959250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.959261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.967251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.967261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.975250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.975260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.983250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.983262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:29.810 [2024-11-06 13:56:08.983266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:29.810 [2024-11-06 13:56:08.983268] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:29.810 [2024-11-06 13:56:08.983271] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:29.810 [2024-11-06 13:56:08.983273] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:29.810 [2024-11-06 13:56:08.983278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:29.810 [2024-11-06 13:56:08.983283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:29.810 
[2024-11-06 13:56:08.983286] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:29.810 [2024-11-06 13:56:08.983289] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.810 [2024-11-06 13:56:08.983293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.983298] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:29.810 [2024-11-06 13:56:08.983301] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.810 [2024-11-06 13:56:08.983303] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.810 [2024-11-06 13:56:08.983308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.983313] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:29.810 [2024-11-06 13:56:08.983316] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:29.810 [2024-11-06 13:56:08.983318] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.810 [2024-11-06 13:56:08.983323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:29.810 [2024-11-06 13:56:08.991249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.991260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.991268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:29.810 [2024-11-06 13:56:08.991273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:29.810 ===================================================== 00:12:29.810 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.810 ===================================================== 00:12:29.810 Controller Capabilities/Features 00:12:29.810 ================================ 00:12:29.810 Vendor ID: 4e58 00:12:29.810 Subsystem Vendor ID: 4e58 00:12:29.810 Serial Number: SPDK2 00:12:29.810 Model Number: SPDK bdev Controller 00:12:29.810 Firmware Version: 25.01 00:12:29.810 Recommended Arb Burst: 6 00:12:29.810 IEEE OUI Identifier: 8d 6b 50 00:12:29.810 Multi-path I/O 00:12:29.810 May have multiple subsystem ports: Yes 00:12:29.810 May have multiple controllers: Yes 00:12:29.810 Associated with SR-IOV VF: No 00:12:29.810 Max Data Transfer Size: 131072 00:12:29.810 Max Number of Namespaces: 32 00:12:29.810 Max Number of I/O Queues: 127 00:12:29.810 NVMe Specification Version (VS): 1.3 00:12:29.810 NVMe Specification Version (Identify): 1.3 00:12:29.810 Maximum Queue Entries: 256 00:12:29.810 Contiguous Queues Required: Yes 00:12:29.810 Arbitration Mechanisms Supported 00:12:29.810 Weighted Round Robin: Not Supported 00:12:29.810 Vendor Specific: Not 
Supported 00:12:29.810 Reset Timeout: 15000 ms 00:12:29.810 Doorbell Stride: 4 bytes 00:12:29.810 NVM Subsystem Reset: Not Supported 00:12:29.810 Command Sets Supported 00:12:29.810 NVM Command Set: Supported 00:12:29.810 Boot Partition: Not Supported 00:12:29.810 Memory Page Size Minimum: 4096 bytes 00:12:29.810 Memory Page Size Maximum: 4096 bytes 00:12:29.810 Persistent Memory Region: Not Supported 00:12:29.810 Optional Asynchronous Events Supported 00:12:29.810 Namespace Attribute Notices: Supported 00:12:29.810 Firmware Activation Notices: Not Supported 00:12:29.810 ANA Change Notices: Not Supported 00:12:29.810 PLE Aggregate Log Change Notices: Not Supported 00:12:29.810 LBA Status Info Alert Notices: Not Supported 00:12:29.810 EGE Aggregate Log Change Notices: Not Supported 00:12:29.810 Normal NVM Subsystem Shutdown event: Not Supported 00:12:29.810 Zone Descriptor Change Notices: Not Supported 00:12:29.810 Discovery Log Change Notices: Not Supported 00:12:29.810 Controller Attributes 00:12:29.810 128-bit Host Identifier: Supported 00:12:29.810 Non-Operational Permissive Mode: Not Supported 00:12:29.810 NVM Sets: Not Supported 00:12:29.810 Read Recovery Levels: Not Supported 00:12:29.810 Endurance Groups: Not Supported 00:12:29.810 Predictable Latency Mode: Not Supported 00:12:29.810 Traffic Based Keep ALive: Not Supported 00:12:29.810 Namespace Granularity: Not Supported 00:12:29.810 SQ Associations: Not Supported 00:12:29.810 UUID List: Not Supported 00:12:29.810 Multi-Domain Subsystem: Not Supported 00:12:29.810 Fixed Capacity Management: Not Supported 00:12:29.811 Variable Capacity Management: Not Supported 00:12:29.811 Delete Endurance Group: Not Supported 00:12:29.811 Delete NVM Set: Not Supported 00:12:29.811 Extended LBA Formats Supported: Not Supported 00:12:29.811 Flexible Data Placement Supported: Not Supported 00:12:29.811 00:12:29.811 Controller Memory Buffer Support 00:12:29.811 ================================ 00:12:29.811 Supported: No 00:12:29.811 00:12:29.811 Persistent Memory Region Support 00:12:29.811 ================================ 00:12:29.811 Supported: No 00:12:29.811 00:12:29.811 Admin Command Set Attributes 00:12:29.811 ============================ 00:12:29.811 Security Send/Receive: Not Supported 00:12:29.811 Format NVM: Not Supported 00:12:29.811 Firmware Activate/Download: Not Supported 00:12:29.811 Namespace Management: Not Supported 00:12:29.811 Device Self-Test: Not Supported 00:12:29.811 Directives: Not Supported 00:12:29.811 NVMe-MI: Not Supported 00:12:29.811 Virtualization Management: Not Supported 00:12:29.811 Doorbell Buffer Config: Not Supported 00:12:29.811 Get LBA Status Capability: Not Supported 00:12:29.811 Command & Feature Lockdown Capability: Not Supported 00:12:29.811 Abort Command Limit: 4 00:12:29.811 Async Event Request Limit: 4 00:12:29.811 Number of Firmware Slots: N/A 00:12:29.811 Firmware Slot 1 Read-Only: N/A 00:12:29.811 Firmware Activation Without Reset: N/A 00:12:29.811 Multiple Update Detection Support: N/A 00:12:29.811 Firmware Update Granularity: No Information Provided 00:12:29.811 Per-Namespace SMART Log: No 00:12:29.811 Asymmetric Namespace Access Log Page: Not Supported 00:12:29.811 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:29.811 Command Effects Log Page: Supported 00:12:29.811 Get Log Page Extended Data: Supported 00:12:29.811 Telemetry Log Pages: Not Supported 00:12:29.811 Persistent Event Log Pages: Not Supported 00:12:29.811 Supported Log Pages Log Page: May Support 00:12:29.811 Commands Supported & 
Effects Log Page: Not Supported 00:12:29.811 Feature Identifiers & Effects Log Page:May Support 00:12:29.811 NVMe-MI Commands & Effects Log Page: May Support 00:12:29.811 Data Area 4 for Telemetry Log: Not Supported 00:12:29.811 Error Log Page Entries Supported: 128 00:12:29.811 Keep Alive: Supported 00:12:29.811 Keep Alive Granularity: 10000 ms 00:12:29.811 00:12:29.811 NVM Command Set Attributes 00:12:29.811 ========================== 00:12:29.811 Submission Queue Entry Size 00:12:29.811 Max: 64 00:12:29.811 Min: 64 00:12:29.811 Completion Queue Entry Size 00:12:29.811 Max: 16 00:12:29.811 Min: 16 00:12:29.811 Number of Namespaces: 32 00:12:29.811 Compare Command: Supported 00:12:29.811 Write Uncorrectable Command: Not Supported 00:12:29.811 Dataset Management Command: Supported 00:12:29.811 Write Zeroes Command: Supported 00:12:29.811 Set Features Save Field: Not Supported 00:12:29.811 Reservations: Not Supported 00:12:29.811 Timestamp: Not Supported 00:12:29.811 Copy: Supported 00:12:29.811 Volatile Write Cache: Present 00:12:29.811 Atomic Write Unit (Normal): 1 00:12:29.811 Atomic Write Unit (PFail): 1 00:12:29.811 Atomic Compare & Write Unit: 1 00:12:29.811 Fused Compare & Write: Supported 00:12:29.811 Scatter-Gather List 00:12:29.811 SGL Command Set: Supported (Dword aligned) 00:12:29.811 SGL Keyed: Not Supported 00:12:29.811 SGL Bit Bucket Descriptor: Not Supported 00:12:29.811 SGL Metadata Pointer: Not Supported 00:12:29.811 Oversized SGL: Not Supported 00:12:29.811 SGL Metadata Address: Not Supported 00:12:29.811 SGL Offset: Not Supported 00:12:29.811 Transport SGL Data Block: Not Supported 00:12:29.811 Replay Protected Memory Block: Not Supported 00:12:29.811 00:12:29.811 Firmware Slot Information 00:12:29.811 ========================= 00:12:29.811 Active slot: 1 00:12:29.811 Slot 1 Firmware Revision: 25.01 00:12:29.811 00:12:29.811 00:12:29.811 Commands Supported and Effects 00:12:29.811 ============================== 00:12:29.811 Admin Commands 00:12:29.811 -------------- 00:12:29.811 Get Log Page (02h): Supported 00:12:29.811 Identify (06h): Supported 00:12:29.811 Abort (08h): Supported 00:12:29.811 Set Features (09h): Supported 00:12:29.811 Get Features (0Ah): Supported 00:12:29.811 Asynchronous Event Request (0Ch): Supported 00:12:29.811 Keep Alive (18h): Supported 00:12:29.811 I/O Commands 00:12:29.811 ------------ 00:12:29.811 Flush (00h): Supported LBA-Change 00:12:29.811 Write (01h): Supported LBA-Change 00:12:29.811 Read (02h): Supported 00:12:29.811 Compare (05h): Supported 00:12:29.811 Write Zeroes (08h): Supported LBA-Change 00:12:29.811 Dataset Management (09h): Supported LBA-Change 00:12:29.811 Copy (19h): Supported LBA-Change 00:12:29.811 00:12:29.811 Error Log 00:12:29.811 ========= 00:12:29.811 00:12:29.811 Arbitration 00:12:29.811 =========== 00:12:29.811 Arbitration Burst: 1 00:12:29.811 00:12:29.811 Power Management 00:12:29.811 ================ 00:12:29.811 Number of Power States: 1 00:12:29.811 Current Power State: Power State #0 00:12:29.811 Power State #0: 00:12:29.811 Max Power: 0.00 W 00:12:29.811 Non-Operational State: Operational 00:12:29.811 Entry Latency: Not Reported 00:12:29.811 Exit Latency: Not Reported 00:12:29.811 Relative Read Throughput: 0 00:12:29.811 Relative Read Latency: 0 00:12:29.811 Relative Write Throughput: 0 00:12:29.811 Relative Write Latency: 0 00:12:29.811 Idle Power: Not Reported 00:12:29.811 Active Power: Not Reported 00:12:29.811 Non-Operational Permissive Mode: Not Supported 00:12:29.811 00:12:29.811 Health Information 
00:12:29.811 ================== 00:12:29.811 Critical Warnings: 00:12:29.811 Available Spare Space: OK 00:12:29.811 Temperature: OK 00:12:29.811 Device Reliability: OK 00:12:29.811 Read Only: No 00:12:29.811 Volatile Memory Backup: OK 00:12:29.811 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:29.811 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:29.811 Available Spare: 0% 00:12:29.811 Available Spare Threshold: 0% 00:12:29.811 [2024-11-06 13:56:08.991347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:29.811 [2024-11-06 13:56:08.999251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:29.811 [2024-11-06 13:56:08.999275] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:29.811 [2024-11-06 13:56:08.999282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.811 [2024-11-06 13:56:08.999287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.811 [2024-11-06 13:56:08.999291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.811 [2024-11-06 13:56:08.999296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.811 [2024-11-06 13:56:08.999327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.811 [2024-11-06 13:56:08.999335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:29.811 [2024-11-06 13:56:09.000331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.811 [2024-11-06 13:56:09.000367] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:29.811 [2024-11-06 13:56:09.000372] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:29.811 [2024-11-06 13:56:09.001332] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:29.811 [2024-11-06 13:56:09.001342] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:29.811 [2024-11-06 13:56:09.001384] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:29.811 [2024-11-06 13:56:09.002350] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.812 Life Percentage Used: 0% 00:12:29.812 Data Units Read: 0 00:12:29.812 Data Units Written: 0 00:12:29.812 Host Read Commands: 0 00:12:29.812 Host Write Commands: 0 00:12:29.812 Controller Busy Time: 0 minutes 00:12:29.812 Power Cycles: 0 00:12:29.812 Power On Hours: 0 hours 00:12:29.812 Unsafe Shutdowns: 0 00:12:29.812 Unrecoverable Media Errors: 0 00:12:29.812 Lifetime Error Log Entries: 0 00:12:29.812 Warning Temperature 
Time: 0 minutes 00:12:29.812 Critical Temperature Time: 0 minutes 00:12:29.812 00:12:29.812 Number of Queues 00:12:29.812 ================ 00:12:29.812 Number of I/O Submission Queues: 127 00:12:29.812 Number of I/O Completion Queues: 127 00:12:29.812 00:12:29.812 Active Namespaces 00:12:29.812 ================= 00:12:29.812 Namespace ID:1 00:12:29.812 Error Recovery Timeout: Unlimited 00:12:29.812 Command Set Identifier: NVM (00h) 00:12:29.812 Deallocate: Supported 00:12:29.812 Deallocated/Unwritten Error: Not Supported 00:12:29.812 Deallocated Read Value: Unknown 00:12:29.812 Deallocate in Write Zeroes: Not Supported 00:12:29.812 Deallocated Guard Field: 0xFFFF 00:12:29.812 Flush: Supported 00:12:29.812 Reservation: Supported 00:12:29.812 Namespace Sharing Capabilities: Multiple Controllers 00:12:29.812 Size (in LBAs): 131072 (0GiB) 00:12:29.812 Capacity (in LBAs): 131072 (0GiB) 00:12:29.812 Utilization (in LBAs): 131072 (0GiB) 00:12:29.812 NGUID: E04F3FF381CB4F14AB392B70F35313D3 00:12:29.812 UUID: e04f3ff3-81cb-4f14-ab39-2b70f35313d3 00:12:29.812 Thin Provisioning: Not Supported 00:12:29.812 Per-NS Atomic Units: Yes 00:12:29.812 Atomic Boundary Size (Normal): 0 00:12:29.812 Atomic Boundary Size (PFail): 0 00:12:29.812 Atomic Boundary Offset: 0 00:12:29.812 Maximum Single Source Range Length: 65535 00:12:29.812 Maximum Copy Length: 65535 00:12:29.812 Maximum Source Range Count: 1 00:12:29.812 NGUID/EUI64 Never Reused: No 00:12:29.812 Namespace Write Protected: No 00:12:29.812 Number of LBA Formats: 1 00:12:29.812 Current LBA Format: LBA Format #00 00:12:29.812 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:29.812 00:12:29.812 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:30.071 [2024-11-06 13:56:09.174572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.369 Initializing NVMe Controllers 00:12:35.369 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.369 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:35.369 Initialization complete. Launching workers. 
00:12:35.369 ======================================================== 00:12:35.369 Latency(us) 00:12:35.369 Device Information : IOPS MiB/s Average min max 00:12:35.369 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.30 156.10 3202.70 841.78 6829.80 00:12:35.369 ======================================================== 00:12:35.370 Total : 39962.30 156.10 3202.70 841.78 6829.80 00:12:35.370 00:12:35.370 [2024-11-06 13:56:14.278439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.370 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:35.370 [2024-11-06 13:56:14.457952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.641 Initializing NVMe Controllers 00:12:40.641 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:40.641 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:40.641 Initialization complete. Launching workers. 00:12:40.641 ======================================================== 00:12:40.641 Latency(us) 00:12:40.641 Device Information : IOPS MiB/s Average min max 00:12:40.641 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39978.46 156.17 3201.60 853.13 7683.45 00:12:40.641 ======================================================== 00:12:40.641 Total : 39978.46 156.17 3201.60 853.13 7683.45 00:12:40.641 00:12:40.641 [2024-11-06 13:56:19.479201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.641 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:40.641 [2024-11-06 13:56:19.675379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.985 [2024-11-06 13:56:24.811327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.985 Initializing NVMe Controllers 00:12:45.985 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:45.985 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:45.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:45.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:45.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:45.985 Initialization complete. Launching workers. 
00:12:45.985 Starting thread on core 2 00:12:45.985 Starting thread on core 3 00:12:45.985 Starting thread on core 1 00:12:45.985 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:45.985 [2024-11-06 13:56:25.050386] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.275 [2024-11-06 13:56:28.099217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.275 Initializing NVMe Controllers 00:12:49.275 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.276 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.276 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:49.276 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:49.276 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:49.276 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:49.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:49.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:49.276 Initialization complete. Launching workers. 00:12:49.276 Starting thread on core 1 with urgent priority queue 00:12:49.276 Starting thread on core 2 with urgent priority queue 00:12:49.276 Starting thread on core 3 with urgent priority queue 00:12:49.276 Starting thread on core 0 with urgent priority queue 00:12:49.276 SPDK bdev Controller (SPDK2 ) core 0: 8826.33 IO/s 11.33 secs/100000 ios 00:12:49.276 SPDK bdev Controller (SPDK2 ) core 1: 15814.67 IO/s 6.32 secs/100000 ios 00:12:49.276 SPDK bdev Controller (SPDK2 ) core 2: 7665.00 IO/s 13.05 secs/100000 ios 00:12:49.276 SPDK bdev Controller (SPDK2 ) core 3: 15240.00 IO/s 6.56 secs/100000 ios 00:12:49.276 ======================================================== 00:12:49.276 00:12:49.276 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.276 [2024-11-06 13:56:28.334638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.276 Initializing NVMe Controllers 00:12:49.276 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.276 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.276 Namespace ID: 1 size: 0GB 00:12:49.276 Initialization complete. 00:12:49.276 INFO: using host memory buffer for IO 00:12:49.276 Hello world! 
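Editor's sketch: every example binary in this run (spdk_nvme_perf, reconnect, arbitration, hello_world) addresses the vfio-user controller through the same -r transport-ID string. A minimal bash sketch of that invocation pattern, reusing the socket path, NQN, and flags seen in the traces above; the SPDK variable points at this workspace's checkout and would change for another tree:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this run
    SOCK=/var/run/vfio-user/domain/vfio-user2/2              # vfio-user socket directory from this run
    NQN=nqn.2019-07.io.spdk:cnode2                           # subsystem NQN from this run
    # -d 256 and -g are copied verbatim from the hello_world trace above
    "$SPDK/build/examples/hello_world" -d 256 -g \
        -r "trtype:VFIOUSER traddr:$SOCK subnqn:$NQN"
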
00:12:49.276 [2024-11-06 13:56:28.346719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.276 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.536 [2024-11-06 13:56:28.575924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.475 Initializing NVMe Controllers 00:12:50.475 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.475 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.475 Initialization complete. Launching workers. 00:12:50.475 submit (in ns) avg, min, max = 5123.0, 2823.3, 3999872.5 00:12:50.475 complete (in ns) avg, min, max = 15577.3, 1665.8, 4993644.2 00:12:50.475 00:12:50.475 Submit histogram 00:12:50.475 ================ 00:12:50.475 Range in us Cumulative Count 00:12:50.475 2.813 - 2.827: 0.0243% ( 5) 00:12:50.475 2.827 - 2.840: 0.7107% ( 141) 00:12:50.475 2.840 - 2.853: 1.7280% ( 209) 00:12:50.475 2.853 - 2.867: 3.7773% ( 421) 00:12:50.475 2.867 - 2.880: 7.1456% ( 692) 00:12:50.475 2.880 - 2.893: 10.7817% ( 747) 00:12:50.475 2.893 - 2.907: 14.8364% ( 833) 00:12:50.475 2.907 - 2.920: 21.2665% ( 1321) 00:12:50.475 2.920 - 2.933: 27.3949% ( 1259) 00:12:50.475 2.933 - 2.947: 33.1435% ( 1181) 00:12:50.475 2.947 - 2.960: 38.9895% ( 1201) 00:12:50.475 2.960 - 2.973: 46.1935% ( 1480) 00:12:50.475 2.973 - 2.987: 53.9866% ( 1601) 00:12:50.475 2.987 - 3.000: 62.3491% ( 1718) 00:12:50.475 3.000 - 3.013: 71.3542% ( 1850) 00:12:50.475 3.013 - 3.027: 79.1423% ( 1600) 00:12:50.475 3.027 - 3.040: 86.4145% ( 1494) 00:12:50.475 3.040 - 3.053: 91.7445% ( 1095) 00:12:50.475 3.053 - 3.067: 94.9523% ( 659) 00:12:50.475 3.067 - 3.080: 96.6706% ( 353) 00:12:50.475 3.080 - 3.093: 97.7560% ( 223) 00:12:50.475 3.093 - 3.107: 98.5884% ( 171) 00:12:50.475 3.107 - 3.120: 99.0654% ( 98) 00:12:50.475 3.120 - 3.133: 99.4013% ( 69) 00:12:50.475 3.133 - 3.147: 99.5084% ( 22) 00:12:50.475 3.147 - 3.160: 99.5717% ( 13) 00:12:50.475 3.160 - 3.173: 99.6009% ( 6) 00:12:50.475 3.173 - 3.187: 99.6106% ( 2) 00:12:50.475 3.187 - 3.200: 99.6155% ( 1) 00:12:50.475 3.213 - 3.227: 99.6203% ( 1) 00:12:50.475 3.333 - 3.347: 99.6252% ( 1) 00:12:50.475 3.347 - 3.360: 99.6349% ( 2) 00:12:50.475 3.440 - 3.467: 99.6398% ( 1) 00:12:50.475 3.467 - 3.493: 99.6447% ( 1) 00:12:50.475 3.573 - 3.600: 99.6495% ( 1) 00:12:50.475 3.920 - 3.947: 99.6544% ( 1) 00:12:50.475 4.480 - 4.507: 99.6641% ( 2) 00:12:50.475 4.640 - 4.667: 99.6690% ( 1) 00:12:50.475 4.693 - 4.720: 99.6739% ( 1) 00:12:50.475 4.933 - 4.960: 99.6836% ( 2) 00:12:50.475 5.013 - 5.040: 99.7031% ( 4) 00:12:50.475 5.067 - 5.093: 99.7128% ( 2) 00:12:50.475 5.120 - 5.147: 99.7225% ( 2) 00:12:50.475 5.147 - 5.173: 99.7274% ( 1) 00:12:50.475 5.333 - 5.360: 99.7323% ( 1) 00:12:50.475 5.573 - 5.600: 99.7371% ( 1) 00:12:50.475 5.760 - 5.787: 99.7420% ( 1) 00:12:50.475 5.787 - 5.813: 99.7469% ( 1) 00:12:50.475 5.840 - 5.867: 99.7615% ( 3) 00:12:50.475 5.947 - 5.973: 99.7664% ( 1) 00:12:50.475 6.000 - 6.027: 99.7712% ( 1) 00:12:50.475 6.053 - 6.080: 99.7761% ( 1) 00:12:50.475 6.107 - 6.133: 99.7810% ( 1) 00:12:50.475 6.133 - 6.160: 99.7858% ( 1) 00:12:50.475 6.160 - 6.187: 99.7907% ( 1) 00:12:50.475 6.187 - 6.213: 99.7956% ( 1) 00:12:50.475 6.213 - 6.240: 99.8004% ( 1) 00:12:50.475 6.293 - 6.320: 
99.8053% ( 1) 00:12:50.475 6.373 - 6.400: 99.8102% ( 1) 00:12:50.475 6.400 - 6.427: 99.8150% ( 1) 00:12:50.475 6.507 - 6.533: 99.8296% ( 3) 00:12:50.475 6.560 - 6.587: 99.8345% ( 1) 00:12:50.475 6.613 - 6.640: 99.8491% ( 3) 00:12:50.475 6.640 - 6.667: 99.8540% ( 1) 00:12:50.475 6.747 - 6.773: 99.8637% ( 2) 00:12:50.475 6.773 - 6.800: 99.8734% ( 2) 00:12:50.475 6.880 - 6.933: 99.8783% ( 1) 00:12:50.475 6.987 - 7.040: 99.8832% ( 1) 00:12:50.475 7.093 - 7.147: 99.8880% ( 1) 00:12:50.475 7.147 - 7.200: 99.8978% ( 2) 00:12:50.475 7.253 - 7.307: 99.9075% ( 2) 00:12:50.475 7.360 - 7.413: 99.9124% ( 1) 00:12:50.475 7.413 - 7.467: 99.9173% ( 1) 00:12:50.475 7.467 - 7.520: 99.9221% ( 1) 00:12:50.475 7.627 - 7.680: 99.9270% ( 1) 00:12:50.475 7.680 - 7.733: 99.9319% ( 1) 00:12:50.475 7.787 - 7.840: 99.9367% ( 1) 00:12:50.475 8.000 - 8.053: 99.9416% ( 1) 00:12:50.475 13.280 - 13.333: 99.9465% ( 1) 00:12:50.475 [2024-11-06 13:56:29.668759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.475 3986.773 - 4014.080: 100.0000% ( 11) 00:12:50.475 00:12:50.475 Complete histogram 00:12:50.475 ================== 00:12:50.475 Range in us Cumulative Count 00:12:50.475 1.660 - 1.667: 0.0049% ( 1) 00:12:50.475 1.667 - 1.673: 0.1266% ( 25) 00:12:50.475 1.673 - 1.680: 0.3505% ( 46) 00:12:50.475 1.680 - 1.687: 0.5014% ( 31) 00:12:50.475 1.687 - 1.693: 0.8129% ( 64) 00:12:50.475 1.693 - 1.700: 1.0855% ( 56) 00:12:50.475 1.700 - 1.707: 1.2364% ( 31) 00:12:50.475 1.707 - 1.720: 1.4311% ( 40) 00:12:50.475 1.720 - 1.733: 1.6209% ( 39) 00:12:50.475 1.733 - 1.747: 14.7829% ( 2704) 00:12:50.475 1.747 - 1.760: 44.1637% ( 6036) 00:12:50.475 1.760 - 1.773: 73.7977% ( 6088) 00:12:50.475 1.773 - 1.787: 89.6174% ( 3250) 00:12:50.476 1.787 - 1.800: 96.7192% ( 1459) 00:12:50.476 1.800 - 1.813: 98.8999% ( 448) 00:12:50.476 1.813 - 1.827: 99.3672% ( 96) 00:12:50.476 1.827 - 1.840: 99.4548% ( 18) 00:12:50.476 1.840 - 1.853: 99.4840% ( 6) 00:12:50.476 1.853 - 1.867: 99.4889% ( 1) 00:12:50.476 1.880 - 1.893: 99.4938% ( 1) 00:12:50.476 2.067 - 2.080: 99.4986% ( 1) 00:12:50.476 2.133 - 2.147: 99.5035% ( 1) 00:12:50.476 3.760 - 3.787: 99.5084% ( 1) 00:12:50.476 3.947 - 3.973: 99.5132% ( 1) 00:12:50.476 4.107 - 4.133: 99.5181% ( 1) 00:12:50.476 4.187 - 4.213: 99.5230% ( 1) 00:12:50.476 4.400 - 4.427: 99.5278% ( 1) 00:12:50.476 4.507 - 4.533: 99.5327% ( 1) 00:12:50.476 4.587 - 4.613: 99.5376% ( 1) 00:12:50.476 4.747 - 4.773: 99.5424% ( 1) 00:12:50.476 4.773 - 4.800: 99.5473% ( 1) 00:12:50.476 5.040 - 5.067: 99.5570% ( 2) 00:12:50.476 5.173 - 5.200: 99.5619% ( 1) 00:12:50.476 5.280 - 5.307: 99.5668% ( 1) 00:12:50.476 5.360 - 5.387: 99.5717% ( 1) 00:12:50.476 5.413 - 5.440: 99.5765% ( 1) 00:12:50.476 5.440 - 5.467: 99.5814% ( 1) 00:12:50.476 5.493 - 5.520: 99.5911% ( 2) 00:12:50.476 5.627 - 5.653: 99.5960% ( 1) 00:12:50.476 5.733 - 5.760: 99.6009% ( 1) 00:12:50.476 5.973 - 6.000: 99.6106% ( 2) 00:12:50.476 6.053 - 6.080: 99.6155% ( 1) 00:12:50.476 6.107 - 6.133: 99.6203% ( 1) 00:12:50.476 6.347 - 6.373: 99.6252% ( 1) 00:12:50.476 6.373 - 6.400: 99.6301% ( 1) 00:12:50.476 6.720 - 6.747: 99.6349% ( 1) 00:12:50.476 6.827 - 6.880: 99.6398% ( 1) 00:12:50.476 6.933 - 6.987: 99.6447% ( 1) 00:12:50.476 12.800 - 12.853: 99.6495% ( 1) 00:12:50.476 25.813 - 25.920: 99.6544% ( 1) 00:12:50.476 3003.733 - 3017.387: 99.6593% ( 1) 00:12:50.476 3986.773 - 4014.080: 99.9951% ( 69) 00:12:50.476 4969.813 - 4997.120: 100.0000% ( 1) 00:12:50.476 00:12:50.476 13:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:50.476 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:50.476 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:50.476 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:50.476 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.735 [ 00:12:50.735 { 00:12:50.735 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.735 "subtype": "Discovery", 00:12:50.735 "listen_addresses": [], 00:12:50.735 "allow_any_host": true, 00:12:50.735 "hosts": [] 00:12:50.735 }, 00:12:50.735 { 00:12:50.735 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.735 "subtype": "NVMe", 00:12:50.735 "listen_addresses": [ 00:12:50.735 { 00:12:50.735 "trtype": "VFIOUSER", 00:12:50.735 "adrfam": "IPv4", 00:12:50.735 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.735 "trsvcid": "0" 00:12:50.735 } 00:12:50.735 ], 00:12:50.735 "allow_any_host": true, 00:12:50.735 "hosts": [], 00:12:50.735 "serial_number": "SPDK1", 00:12:50.735 "model_number": "SPDK bdev Controller", 00:12:50.735 "max_namespaces": 32, 00:12:50.735 "min_cntlid": 1, 00:12:50.735 "max_cntlid": 65519, 00:12:50.735 "namespaces": [ 00:12:50.735 { 00:12:50.735 "nsid": 1, 00:12:50.735 "bdev_name": "Malloc1", 00:12:50.735 "name": "Malloc1", 00:12:50.735 "nguid": "426DF96D4A7840148E8C1EF238BE06F2", 00:12:50.735 "uuid": "426df96d-4a78-4014-8e8c-1ef238be06f2" 00:12:50.735 }, 00:12:50.735 { 00:12:50.735 "nsid": 2, 00:12:50.735 "bdev_name": "Malloc3", 00:12:50.735 "name": "Malloc3", 00:12:50.735 "nguid": "CD713AAC20C847DF822D320E0314A8A1", 00:12:50.735 "uuid": "cd713aac-20c8-47df-822d-320e0314a8a1" 00:12:50.735 } 00:12:50.735 ] 00:12:50.735 }, 00:12:50.735 { 00:12:50.735 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.735 "subtype": "NVMe", 00:12:50.735 "listen_addresses": [ 00:12:50.736 { 00:12:50.736 "trtype": "VFIOUSER", 00:12:50.736 "adrfam": "IPv4", 00:12:50.736 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.736 "trsvcid": "0" 00:12:50.736 } 00:12:50.736 ], 00:12:50.736 "allow_any_host": true, 00:12:50.736 "hosts": [], 00:12:50.736 "serial_number": "SPDK2", 00:12:50.736 "model_number": "SPDK bdev Controller", 00:12:50.736 "max_namespaces": 32, 00:12:50.736 "min_cntlid": 1, 00:12:50.736 "max_cntlid": 65519, 00:12:50.736 "namespaces": [ 00:12:50.736 { 00:12:50.736 "nsid": 1, 00:12:50.736 "bdev_name": "Malloc2", 00:12:50.736 "name": "Malloc2", 00:12:50.736 "nguid": "E04F3FF381CB4F14AB392B70F35313D3", 00:12:50.736 "uuid": "e04f3ff3-81cb-4f14-ab39-2b70f35313d3" 00:12:50.736 } 00:12:50.736 ] 00:12:50.736 } 00:12:50.736 ] 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=808643 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 
subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=1 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # i=2 00:12:50.736 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # sleep 0.1 00:12:50.995 [2024-11-06 13:56:30.021714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.995 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.995 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.995 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:12:50.995 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:50.995 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:50.995 Malloc4 00:12:50.995 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:51.253 [2024-11-06 13:56:30.400309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.253 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.253 Asynchronous Event Request test 00:12:51.253 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.253 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.253 Registering asynchronous event callbacks... 00:12:51.253 Starting namespace attribute notice tests for all controllers... 00:12:51.253 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:51.253 aer_cb - Changed Namespace 00:12:51.253 Cleaning up... 
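Editor's sketch: the waitforfile xtrace above (autotest_common.sh lines 1267-1278 in this tree) polls for the touch file that the aer tool creates once its AER callback fires. A condensed reconstruction of that loop from the trace; the 0.1 s sleep and 200-iteration cap are both visible in the traced conditions, but this is not the verbatim helper:

    waitforfile() {
        local i=0
        # poll for the file, 0.1 s per iteration, giving up after 200 tries
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        # exit status reflects whether the file ever appeared
        [ -e "$1" ]
    }
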
00:12:51.513 [ 00:12:51.514 { 00:12:51.514 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.514 "subtype": "Discovery", 00:12:51.514 "listen_addresses": [], 00:12:51.514 "allow_any_host": true, 00:12:51.514 "hosts": [] 00:12:51.514 }, 00:12:51.514 { 00:12:51.514 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.514 "subtype": "NVMe", 00:12:51.514 "listen_addresses": [ 00:12:51.514 { 00:12:51.514 "trtype": "VFIOUSER", 00:12:51.514 "adrfam": "IPv4", 00:12:51.514 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.514 "trsvcid": "0" 00:12:51.514 } 00:12:51.514 ], 00:12:51.514 "allow_any_host": true, 00:12:51.514 "hosts": [], 00:12:51.514 "serial_number": "SPDK1", 00:12:51.514 "model_number": "SPDK bdev Controller", 00:12:51.514 "max_namespaces": 32, 00:12:51.514 "min_cntlid": 1, 00:12:51.514 "max_cntlid": 65519, 00:12:51.514 "namespaces": [ 00:12:51.514 { 00:12:51.514 "nsid": 1, 00:12:51.514 "bdev_name": "Malloc1", 00:12:51.514 "name": "Malloc1", 00:12:51.514 "nguid": "426DF96D4A7840148E8C1EF238BE06F2", 00:12:51.514 "uuid": "426df96d-4a78-4014-8e8c-1ef238be06f2" 00:12:51.514 }, 00:12:51.514 { 00:12:51.514 "nsid": 2, 00:12:51.514 "bdev_name": "Malloc3", 00:12:51.514 "name": "Malloc3", 00:12:51.514 "nguid": "CD713AAC20C847DF822D320E0314A8A1", 00:12:51.514 "uuid": "cd713aac-20c8-47df-822d-320e0314a8a1" 00:12:51.514 } 00:12:51.514 ] 00:12:51.514 }, 00:12:51.514 { 00:12:51.514 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.514 "subtype": "NVMe", 00:12:51.514 "listen_addresses": [ 00:12:51.514 { 00:12:51.514 "trtype": "VFIOUSER", 00:12:51.514 "adrfam": "IPv4", 00:12:51.514 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.514 "trsvcid": "0" 00:12:51.514 } 00:12:51.514 ], 00:12:51.514 "allow_any_host": true, 00:12:51.514 "hosts": [], 00:12:51.514 "serial_number": "SPDK2", 00:12:51.514 "model_number": "SPDK bdev Controller", 00:12:51.514 "max_namespaces": 32, 00:12:51.514 "min_cntlid": 1, 00:12:51.514 "max_cntlid": 65519, 00:12:51.514 "namespaces": [ 00:12:51.514 { 00:12:51.514 "nsid": 1, 00:12:51.514 "bdev_name": "Malloc2", 00:12:51.514 "name": "Malloc2", 00:12:51.514 "nguid": "E04F3FF381CB4F14AB392B70F35313D3", 00:12:51.514 "uuid": "e04f3ff3-81cb-4f14-ab39-2b70f35313d3" 00:12:51.514 }, 00:12:51.514 { 00:12:51.514 "nsid": 2, 00:12:51.514 "bdev_name": "Malloc4", 00:12:51.514 "name": "Malloc4", 00:12:51.514 "nguid": "72FB27EA30694EF8B8AFC1AAEFA5611A", 00:12:51.514 "uuid": "72fb27ea-3069-4ef8-b8af-c1aaefa5611a" 00:12:51.514 } 00:12:51.514 ] 00:12:51.514 } 00:12:51.514 ] 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 808643 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 798573 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 798573 ']' 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 798573 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 798573 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 798573' 00:12:51.514 killing process with pid 798573 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 798573 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 798573 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=808972 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 808972' 00:12:51.514 Process pid: 808972 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 808972 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 808972 ']' 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:51.514 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:51.773 [2024-11-06 13:56:30.805787] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:51.773 [2024-11-06 13:56:30.806732] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:12:51.773 [2024-11-06 13:56:30.806770] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.773 [2024-11-06 13:56:30.873978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.773 [2024-11-06 13:56:30.902944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.773 [2024-11-06 13:56:30.902972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.773 [2024-11-06 13:56:30.902977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.773 [2024-11-06 13:56:30.902982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.773 [2024-11-06 13:56:30.902987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.773 [2024-11-06 13:56:30.904168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.773 [2024-11-06 13:56:30.904337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.773 [2024-11-06 13:56:30.904644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.773 [2024-11-06 13:56:30.904645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.773 [2024-11-06 13:56:30.956683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:51.773 [2024-11-06 13:56:30.957428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:51.773 [2024-11-06 13:56:30.957549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:51.773 [2024-11-06 13:56:30.957762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:51.773 [2024-11-06 13:56:30.957868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
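Editor's sketch: the interrupt-mode bring-up recorded here reduces to launching nvmf_tgt with --interrupt-mode and then creating the VFIOUSER transport with -M -I, exactly the two commands traced above and just below. A minimal bash sketch with the paths from this run; the fixed sleep is a simplification standing in for the script's RPC-socket wait:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this run
    # start the target on cores 0-3 in interrupt mode, as in the trace above
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    sleep 1   # simplification; the real script waits on the RPC socket
    # create the vfio-user transport; -M -I are the flags this test passes
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I
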
00:12:51.773 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:51.773 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:12:51.773 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:52.711 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:52.971 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:52.971 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:52.971 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.971 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:52.971 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:53.230 Malloc1 00:12:53.230 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:53.230 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:53.489 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:53.748 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.748 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:53.749 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:53.749 Malloc2 00:12:53.749 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:54.008 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 808972 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 808972 ']' 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 808972 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 808972 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 808972' 00:12:54.268 killing process with pid 808972 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 808972 00:12:54.268 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 808972 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:54.528 00:12:54.528 real 0m49.734s 00:12:54.528 user 3m13.033s 00:12:54.528 sys 0m2.382s 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 ************************************ 00:12:54.528 END TEST nvmf_vfio_user 00:12:54.528 ************************************ 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 ************************************ 00:12:54.528 START TEST nvmf_vfio_user_nvme_compliance 00:12:54.528 ************************************ 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:54.528 * Looking for test storage... 
00:12:54.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:12:54.528 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.529 --rc genhtml_branch_coverage=1 00:12:54.529 --rc genhtml_function_coverage=1 00:12:54.529 --rc genhtml_legend=1 00:12:54.529 --rc geninfo_all_blocks=1 00:12:54.529 --rc geninfo_unexecuted_blocks=1 00:12:54.529 00:12:54.529 ' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.529 --rc genhtml_branch_coverage=1 00:12:54.529 --rc genhtml_function_coverage=1 00:12:54.529 --rc genhtml_legend=1 00:12:54.529 --rc geninfo_all_blocks=1 00:12:54.529 --rc geninfo_unexecuted_blocks=1 00:12:54.529 00:12:54.529 ' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.529 --rc genhtml_branch_coverage=1 00:12:54.529 --rc genhtml_function_coverage=1 00:12:54.529 --rc genhtml_legend=1 00:12:54.529 --rc geninfo_all_blocks=1 00:12:54.529 --rc geninfo_unexecuted_blocks=1 00:12:54.529 00:12:54.529 ' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.529 --rc genhtml_branch_coverage=1 00:12:54.529 --rc genhtml_function_coverage=1 00:12:54.529 --rc genhtml_legend=1 00:12:54.529 --rc geninfo_all_blocks=1 00:12:54.529 --rc 
geninfo_unexecuted_blocks=1 00:12:54.529 00:12:54.529 ' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:54.529 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=809720 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 809720' 00:12:54.790 Process pid: 809720 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 809720 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 809720 ']' 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.790 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:54.790 [2024-11-06 13:56:33.848234] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:12:54.790 [2024-11-06 13:56:33.848289] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.790 [2024-11-06 13:56:33.914272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.790 [2024-11-06 13:56:33.943731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.790 [2024-11-06 13:56:33.943760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.790 [2024-11-06 13:56:33.943767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.790 [2024-11-06 13:56:33.943772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.790 [2024-11-06 13:56:33.943776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.790 [2024-11-06 13:56:33.944865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.790 [2024-11-06 13:56:33.945012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.790 [2024-11-06 13:56:33.945015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.790 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.790 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:12:54.790 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.169 malloc0 00:12:56.169 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:56.170 13:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.170 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:56.170 00:12:56.170 00:12:56.170 CUnit - A unit testing framework for C - Version 2.1-3 00:12:56.170 http://cunit.sourceforge.net/ 00:12:56.170 00:12:56.170 00:12:56.170 Suite: nvme_compliance 00:12:56.170 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 13:56:35.233688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.170 [2024-11-06 13:56:35.234977] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:56.170 [2024-11-06 13:56:35.234989] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:56.170 [2024-11-06 13:56:35.234994] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:56.170 [2024-11-06 13:56:35.236705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.170 passed 00:12:56.170 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 13:56:35.312201] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.170 [2024-11-06 13:56:35.315220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.170 passed 00:12:56.170 Test: admin_identify_ns ...[2024-11-06 13:56:35.391589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.428 [2024-11-06 13:56:35.454251] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:56.429 [2024-11-06 13:56:35.462253] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:56.429 [2024-11-06 13:56:35.483334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:12:56.429 passed 00:12:56.429 Test: admin_get_features_mandatory_features ...[2024-11-06 13:56:35.556520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.429 [2024-11-06 13:56:35.559542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.429 passed 00:12:56.429 Test: admin_get_features_optional_features ...[2024-11-06 13:56:35.636031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.429 [2024-11-06 13:56:35.639044] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.429 passed 00:12:56.688 Test: admin_set_features_number_of_queues ...[2024-11-06 13:56:35.715785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.688 [2024-11-06 13:56:35.821338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.688 passed 00:12:56.688 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 13:56:35.896558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.688 [2024-11-06 13:56:35.899572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.688 passed 00:12:56.946 Test: admin_get_log_page_with_lpo ...[2024-11-06 13:56:35.972311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.946 [2024-11-06 13:56:36.041379] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:56.946 [2024-11-06 13:56:36.055292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.946 passed 00:12:56.946 Test: fabric_property_get ...[2024-11-06 13:56:36.128486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.946 [2024-11-06 13:56:36.129686] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:56.946 [2024-11-06 13:56:36.131502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.947 passed 00:12:56.947 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 13:56:36.207955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.947 [2024-11-06 13:56:36.209159] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:56.947 [2024-11-06 13:56:36.210979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.206 passed 00:12:57.206 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 13:56:36.287595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.206 [2024-11-06 13:56:36.371252] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:57.206 [2024-11-06 13:56:36.387250] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:57.206 [2024-11-06 13:56:36.392315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.206 passed 00:12:57.206 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 13:56:36.468351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.206 [2024-11-06 13:56:36.469551] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:57.206 [2024-11-06 13:56:36.471370] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.465 passed 00:12:57.465 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 13:56:36.548613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.465 [2024-11-06 13:56:36.625260] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:57.465 [2024-11-06 13:56:36.649252] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:57.465 [2024-11-06 13:56:36.654316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.465 passed 00:12:57.465 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 13:56:36.725520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.465 [2024-11-06 13:56:36.726718] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:57.465 [2024-11-06 13:56:36.726739] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:57.465 [2024-11-06 13:56:36.728540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.724 passed 00:12:57.724 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 13:56:36.807281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.724 [2024-11-06 13:56:36.900249] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:57.724 [2024-11-06 13:56:36.908249] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:57.724 [2024-11-06 13:56:36.916248] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:57.724 [2024-11-06 13:56:36.924253] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:57.724 [2024-11-06 13:56:36.953314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.724 passed 00:12:57.984 Test: admin_create_io_sq_verify_pc ...[2024-11-06 13:56:37.026600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.984 [2024-11-06 13:56:37.043257] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:57.984 [2024-11-06 13:56:37.060771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.984 passed 00:12:57.984 Test: admin_create_io_qp_max_qps ...[2024-11-06 13:56:37.136218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.362 [2024-11-06 13:56:38.240253] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:12:59.362 [2024-11-06 13:56:38.613376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.362 passed 00:12:59.621 Test: admin_create_io_sq_shared_cq ...[2024-11-06 13:56:38.691611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.621 [2024-11-06 13:56:38.823252] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:59.621 [2024-11-06 13:56:38.860298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.621 passed 00:12:59.621 00:12:59.621 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.621 suites 1 1 n/a 0 0 00:12:59.621 tests 18 18 18 0 0 00:12:59.621 asserts 
360 360 360 0 n/a 00:12:59.621 00:12:59.621 Elapsed time = 1.489 seconds 00:12:59.621 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 809720 00:12:59.621 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 809720 ']' 00:12:59.621 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 809720 00:12:59.621 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:12:59.622 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:59.881 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 809720 00:12:59.881 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:59.881 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:59.881 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 809720' 00:12:59.881 killing process with pid 809720 00:12:59.881 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 809720 00:12:59.881 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 809720 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:59.881 00:12:59.881 real 0m5.388s 00:12:59.881 user 0m15.363s 00:12:59.881 sys 0m0.404s 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:59.881 ************************************ 00:12:59.881 END TEST nvmf_vfio_user_nvme_compliance 00:12:59.881 ************************************ 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.881 ************************************ 00:12:59.881 START TEST nvmf_vfio_user_fuzz 00:12:59.881 ************************************ 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:59.881 * Looking for test storage... 
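The xtrace that follows is scripts/common.sh deciding which lcov option names to export: `lt 1.15 2` splits each version string on `.`, `-` and `:` and compares the fields numerically, left to right. A standalone sketch of that comparison, reconstructed from the traced lines (helper name and semantics as shown in the trace; zero-padding of the shorter version is an assumption):

    lt() {  # returns 0 (true) when version $1 sorts strictly before $2
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2.x: use --rc lcov_branch_coverage=1"

Here lcov 1.15 sorts before 2, so the run exports the pre-2.x option spelling seen in the LCOV_OPTS block below.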
00:12:59.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:12:59.881 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.140 --rc genhtml_branch_coverage=1 00:13:00.140 --rc genhtml_function_coverage=1 00:13:00.140 --rc genhtml_legend=1 00:13:00.140 --rc geninfo_all_blocks=1 00:13:00.140 --rc geninfo_unexecuted_blocks=1 00:13:00.140 00:13:00.140 ' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.140 --rc genhtml_branch_coverage=1 00:13:00.140 --rc genhtml_function_coverage=1 00:13:00.140 --rc genhtml_legend=1 00:13:00.140 --rc geninfo_all_blocks=1 00:13:00.140 --rc geninfo_unexecuted_blocks=1 00:13:00.140 00:13:00.140 ' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.140 --rc genhtml_branch_coverage=1 00:13:00.140 --rc genhtml_function_coverage=1 00:13:00.140 --rc genhtml_legend=1 00:13:00.140 --rc geninfo_all_blocks=1 00:13:00.140 --rc geninfo_unexecuted_blocks=1 00:13:00.140 00:13:00.140 ' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.140 --rc genhtml_branch_coverage=1 00:13:00.140 --rc genhtml_function_coverage=1 00:13:00.140 --rc genhtml_legend=1 00:13:00.140 --rc geninfo_all_blocks=1 00:13:00.140 --rc geninfo_unexecuted_blocks=1 00:13:00.140 00:13:00.140 ' 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.140 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:00.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=810804 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 810804' 00:13:00.141 Process pid: 810804 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 810804 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 810804 ']' 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
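Condensed from the bring-up that follows: the fuzz test starts a single-core nvmf_tgt, issues the same five RPCs the compliance test used, and then points nvme_fuzz at the vfio-user endpoint for 30 seconds with a fixed seed. A sketch of the equivalent manual sequence, assuming `rpc_cmd` forwards to scripts/rpc.py as these tests usually do (the argument strings are taken verbatim from the trace; $SPDK shortens the workspace path):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &   # core 0 only, all tracepoint groups
    sleep 1                                              # the script proper waits via waitforlisten on /var/tmp/spdk.sock

    rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
    rpc nvmf_create_transport -t VFIOUSER
    rpc bdev_malloc_create 64 512 -b malloc0             # 64 MiB RAM bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    # 30 s fuzz run on a second core; -S fixes the random seed so results are reproducible
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The fixed seed (`-S 123456`) is what makes the command totals in the "Fuzzing completed" summary below comparable across runs of the same SPDK revision.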
00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.141 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:00.400 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.400 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:13:00.400 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 malloc0 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:01.339 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:33.416 Fuzzing completed. Shutting down the fuzz application 00:13:33.416 00:13:33.416 Dumping successful admin opcodes: 00:13:33.416 8, 9, 10, 24, 00:13:33.416 Dumping successful io opcodes: 00:13:33.416 0, 00:13:33.416 NS: 0x20000081ef00 I/O qp, Total commands completed: 1399039, total successful commands: 5486, random_seed: 1734925312 00:13:33.416 NS: 0x20000081ef00 admin qp, Total commands completed: 346892, total successful commands: 2791, random_seed: 2472922816 00:13:33.416 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:33.416 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 810804 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 810804 ']' 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 810804 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 810804 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 810804' 00:13:33.417 killing process with pid 810804 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 810804 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 810804 00:13:33.417 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:33.417 00:13:33.417 real 0m31.927s 00:13:33.417 user 0m36.706s 00:13:33.417 sys 0m23.634s 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:33.417 ************************************ 00:13:33.417 END TEST nvmf_vfio_user_fuzz 00:13:33.417 ************************************ 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.417 ************************************ 00:13:33.417 START TEST nvmf_auth_target 00:13:33.417 ************************************ 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:33.417 * Looking for test storage... 00:13:33.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:33.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.417 --rc genhtml_branch_coverage=1 00:13:33.417 --rc genhtml_function_coverage=1 00:13:33.417 --rc genhtml_legend=1 00:13:33.417 --rc geninfo_all_blocks=1 00:13:33.417 --rc geninfo_unexecuted_blocks=1 00:13:33.417 00:13:33.417 ' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:33.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.417 --rc genhtml_branch_coverage=1 00:13:33.417 --rc genhtml_function_coverage=1 00:13:33.417 --rc genhtml_legend=1 00:13:33.417 --rc geninfo_all_blocks=1 00:13:33.417 --rc geninfo_unexecuted_blocks=1 00:13:33.417 00:13:33.417 ' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:33.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.417 --rc genhtml_branch_coverage=1 00:13:33.417 --rc genhtml_function_coverage=1 00:13:33.417 --rc genhtml_legend=1 00:13:33.417 --rc geninfo_all_blocks=1 00:13:33.417 --rc geninfo_unexecuted_blocks=1 00:13:33.417 00:13:33.417 ' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:33.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.417 --rc genhtml_branch_coverage=1 00:13:33.417 --rc genhtml_function_coverage=1 00:13:33.417 --rc genhtml_legend=1 00:13:33.417 --rc geninfo_all_blocks=1 00:13:33.417 --rc geninfo_unexecuted_blocks=1 00:13:33.417 00:13:33.417 ' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.417 13:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.417 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.418 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:37.611 
13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:37.611 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.611 13:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:37.611 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:37.611 Found net devices under 0000:31:00.0: cvl_0_0 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:37.611 Found net devices under 0000:31:00.1: cvl_0_1 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
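At this point nvmftestinit has matched both E810 functions (device ID 0x159b under Intel vendor 0x8086) and resolved each PCI address to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A condensed sketch of that discovery step, using the PCI addresses from this run:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # each PCI network function lists its netdev name(s) under sysfs
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${net##*/}"
        done
    done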
net_devs+=("${pci_net_devs[@]}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.611 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.612 13:57:16 
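nvmf_tcp_init then builds a two-endpoint topology on one machine: the target-side port cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the root namespace, and an iptables rule opens TCP/4420 for the NVMe-oF listener. The commands from the trace, in order:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT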
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:13:37.612 00:13:37.612 --- 10.0.0.2 ping statistics --- 00:13:37.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.612 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:13:37.612 00:13:37.612 --- 10.0.0.1 ping statistics --- 00:13:37.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.612 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=822303 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 822303 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 822303 ']' 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
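Both directions are verified before any NVMe traffic flows: one ping from the root namespace to the target address and one from inside the namespace back to the initiator, as the statistics above show. With connectivity confirmed and nvme-tcp loaded, nvmfappstart launches the target wrapped in the namespace via NVMF_TARGET_NS_CMD. The check, compactly:

    ping -c 1 10.0.0.2                                   # root ns -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator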
00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=822328 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=babd42982eb49d85ed9309253c31be11bfe37efb84f5e982 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SJM 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key babd42982eb49d85ed9309253c31be11bfe37efb84f5e982 0 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 babd42982eb49d85ed9309253c31be11bfe37efb84f5e982 0 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=babd42982eb49d85ed9309253c31be11bfe37efb84f5e982 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SJM 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SJM 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.SJM 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c652eba12d6b385ba50d5c82c5b4f791774dad01ee95f0ba7d0acd504ba8541d 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O57 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c652eba12d6b385ba50d5c82c5b4f791774dad01ee95f0ba7d0acd504ba8541d 3 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c652eba12d6b385ba50d5c82c5b4f791774dad01ee95f0ba7d0acd504ba8541d 3 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c652eba12d6b385ba50d5c82c5b4f791774dad01ee95f0ba7d0acd504ba8541d 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O57 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O57 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.O57 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3090f9287379709c063154a16da38bf 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MVa 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f3090f9287379709c063154a16da38bf 1 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3090f9287379709c063154a16da38bf 1 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.612 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3090f9287379709c063154a16da38bf 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MVa 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MVa 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.MVa 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:37.613 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a8ac89d7d632d971c0ae71d9252a83e0db7a988783929de1 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iHE 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a8ac89d7d632d971c0ae71d9252a83e0db7a988783929de1 2 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 a8ac89d7d632d971c0ae71d9252a83e0db7a988783929de1 2 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a8ac89d7d632d971c0ae71d9252a83e0db7a988783929de1 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iHE 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iHE 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.iHE 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd0500604df340a981cf82d0f7938358072c4ff177cb1a87 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pv6 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd0500604df340a981cf82d0f7938358072c4ff177cb1a87 2 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bd0500604df340a981cf82d0f7938358072c4ff177cb1a87 2 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.872 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd0500604df340a981cf82d0f7938358072c4ff177cb1a87 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pv6 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pv6 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Pv6 
00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4a19eee99ac1e46597757d06f06b571f 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fw1 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4a19eee99ac1e46597757d06f06b571f 1 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4a19eee99ac1e46597757d06f06b571f 1 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4a19eee99ac1e46597757d06f06b571f 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:37.873 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fw1 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fw1 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Fw1 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=02eff631ef55de155ed94b340014f3eadae50f3de4d04f6bb6113cf2a5196a1f 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WUo 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 02eff631ef55de155ed94b340014f3eadae50f3de4d04f6bb6113cf2a5196a1f 3 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 02eff631ef55de155ed94b340014f3eadae50f3de4d04f6bb6113cf2a5196a1f 3 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=02eff631ef55de155ed94b340014f3eadae50f3de4d04f6bb6113cf2a5196a1f 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WUo 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WUo 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.WUo 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 822303 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 822303 ']' 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:37.873 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 822328 /var/tmp/host.sock 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 822328 ']' 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
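All four key slots are populated at this point. gen_dhchap_key draws len/2 random bytes as a hex string with xxd, then wraps them into the NVMe-oF DH-HMAC-CHAP transport secret format "DHHC-1:<two-digit digest id>:<base64 payload>:" via an inline python helper in nvmf/common.sh; my understanding (an assumption, not verified against this tree) is that the base64 payload is the raw key with its CRC-32 appended, per the NVMe spec. A sketch of the shell half, for the first key:

    digest=null; len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex chars of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # the python helper writes "DHHC-1:00:<base64 payload>:" into $file
    # (digest id 00 for null, 01/02/03 for sha256/sha384/sha512)
    chmod 0600 "$file"
    echo "$file"

The sha512 companion generated alongside key0 becomes ckey0, the controller-side secret used for bidirectional authentication; key3 deliberately gets no companion (ckeys[3] is empty).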
00:13:38.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SJM 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.132 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SJM 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SJM 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.O57 ]] 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O57 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O57 00:13:38.390 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O57 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MVa 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.649 13:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MVa 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MVa 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.iHE ]] 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHE 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHE 00:13:38.649 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHE 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pv6 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Pv6 00:13:38.908 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Pv6 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Fw1 ]] 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fw1 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fw1 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fw1 00:13:39.167 13:57:18 
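Two RPC planes are in play in the registration above: rpc_cmd talks to the target over the default /var/tmp/spdk.sock, while the hostrpc wrapper (auth.sh@31) points rpc.py at the initiator's /var/tmp/host.sock. Each key file is registered on both sides under the same keyring name, and a controller key (ckeyN) is added only when one was generated. Distilled, with file names from this run and helper functions that are illustrative stand-ins for rpc_cmd/hostrpc:

    rpc_tgt()  { scripts/rpc.py "$@"; }                        # /var/tmp/spdk.sock
    rpc_host() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # /var/tmp/host.sock
    rpc_tgt  keyring_file_add_key key0  /tmp/spdk.key-null.SJM
    rpc_host keyring_file_add_key key0  /tmp/spdk.key-null.SJM
    rpc_tgt  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O57
    rpc_host keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O57
    # ... likewise key1/ckey1 and key2/ckey2; key3 follows with no ckey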
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WUo 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.WUo 00:13:39.167 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.WUo 00:13:39.425 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:39.425 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:39.425 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:39.425 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.425 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:39.425 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
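The matrix walk begins here: for every digest, dhgroup, and key index, the host is pinned to exactly one digest/dhgroup pair before connecting, so a successful handshake proves that specific combination. The loop, reconstructed from auth.sh@118-121 and the arrays set at auth.sh@13-14:

    for digest in sha256 sha384 sha512; do
      for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3; do
          # restrict the initiator to this one combination
          scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # connect_authenticate "$digest" "$dhgroup" "$keyid" follows
        done
      done
    done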
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.684 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.684 00:13:39.942 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.942 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.943 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.943 { 00:13:39.943 "cntlid": 1, 00:13:39.943 "qid": 0, 00:13:39.943 "state": "enabled", 00:13:39.943 "thread": "nvmf_tgt_poll_group_000", 00:13:39.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:39.943 "listen_address": { 00:13:39.943 "trtype": "TCP", 00:13:39.943 "adrfam": "IPv4", 00:13:39.943 "traddr": "10.0.0.2", 00:13:39.943 "trsvcid": "4420" 00:13:39.943 }, 00:13:39.943 "peer_address": { 00:13:39.943 "trtype": "TCP", 00:13:39.943 "adrfam": "IPv4", 00:13:39.943 "traddr": "10.0.0.1", 00:13:39.943 "trsvcid": "48850" 00:13:39.943 }, 00:13:39.943 "auth": { 00:13:39.943 "state": "completed", 00:13:39.943 "digest": "sha256", 00:13:39.943 "dhgroup": "null" 00:13:39.943 } 00:13:39.943 } 00:13:39.943 ]' 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.943 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
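connect_authenticate then proves the handshake actually ran with the expected parameters: it adds the host to the subsystem with the keyring names, attaches a controller from the initiator, and inspects the resulting qpair's auth block, exactly as the jq probes above show. A sketch of the verification half:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    # then tear down the bdev-level connection before the kernel-initiator leg
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0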
00:13:40.202 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:40.202 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.491 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
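Each iteration also exercises the kernel initiator: nvme-cli connects with the literal DHHC-1 secrets (host secret for the key, controller secret for its ckey), disconnects, and the host entry is removed from the subsystem before the next key index. The shape of that leg, with the secrets elided here (the full values appear in the trace above):

    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
        --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

The remainder of the section repeats this cycle for key1 and key2 (cntlid 3 and 5 in the qpair dumps), still under sha256/null, before the outer loops advance.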
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.492 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.751 00:13:43.751 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.751 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.751 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.009 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.009 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.009 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.009 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.009 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.009 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.009 { 00:13:44.009 "cntlid": 3, 00:13:44.009 "qid": 0, 00:13:44.009 "state": "enabled", 00:13:44.010 "thread": "nvmf_tgt_poll_group_000", 00:13:44.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:44.010 "listen_address": { 00:13:44.010 "trtype": "TCP", 00:13:44.010 "adrfam": "IPv4", 00:13:44.010 "traddr": "10.0.0.2", 00:13:44.010 "trsvcid": "4420" 00:13:44.010 }, 00:13:44.010 "peer_address": { 00:13:44.010 "trtype": "TCP", 00:13:44.010 "adrfam": "IPv4", 00:13:44.010 "traddr": "10.0.0.1", 00:13:44.010 "trsvcid": "48870" 00:13:44.010 }, 00:13:44.010 "auth": { 00:13:44.010 "state": "completed", 00:13:44.010 "digest": "sha256", 00:13:44.010 "dhgroup": "null" 00:13:44.010 } 00:13:44.010 } 00:13:44.010 ]' 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:44.010 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.342 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:13:44.342 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:13:44.924 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.925 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.925 13:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.925 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.185 00:13:45.185 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.185 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.185 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.444 { 00:13:45.444 "cntlid": 5, 00:13:45.444 "qid": 0, 00:13:45.444 "state": "enabled", 00:13:45.444 "thread": "nvmf_tgt_poll_group_000", 00:13:45.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:45.444 "listen_address": { 00:13:45.444 "trtype": "TCP", 00:13:45.444 "adrfam": "IPv4", 00:13:45.444 "traddr": "10.0.0.2", 00:13:45.444 "trsvcid": "4420" 00:13:45.444 }, 00:13:45.444 "peer_address": { 00:13:45.444 "trtype": "TCP", 00:13:45.444 "adrfam": "IPv4", 00:13:45.444 "traddr": "10.0.0.1", 00:13:45.444 "trsvcid": "49446" 00:13:45.444 }, 00:13:45.444 "auth": { 00:13:45.444 "state": "completed", 00:13:45.444 "digest": "sha256", 00:13:45.444 "dhgroup": "null" 00:13:45.444 } 00:13:45.444 } 00:13:45.444 ]' 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.444 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.703 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:13:45.703 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.269 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.270 
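The nvme connect/disconnect pair inside each pass re-runs the same authentication through the kernel initiator. A minimal sketch of that leg, assuming $hostnqn and $hostid carry the UUID identity from the log and $key/$ckey hold the DHHC-1 secrets printed there (in the DH-HMAC-CHAP secret format, the two-digit field after DHHC-1: names the transformation hash, 00 for none and 01/02/03 for SHA-256/384/512):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0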
13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.270 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.528 00:13:46.528 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.528 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.528 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.786 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.786 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.786 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.786 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.786 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.786 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.786 { 00:13:46.786 "cntlid": 7, 00:13:46.786 "qid": 0, 00:13:46.786 "state": "enabled", 00:13:46.786 "thread": "nvmf_tgt_poll_group_000", 00:13:46.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:46.786 "listen_address": { 00:13:46.786 "trtype": "TCP", 00:13:46.786 "adrfam": "IPv4", 00:13:46.786 "traddr": "10.0.0.2", 00:13:46.786 "trsvcid": "4420" 00:13:46.786 }, 00:13:46.786 "peer_address": { 00:13:46.786 "trtype": "TCP", 00:13:46.786 "adrfam": "IPv4", 00:13:46.786 "traddr": "10.0.0.1", 00:13:46.786 "trsvcid": "49466" 00:13:46.786 }, 00:13:46.786 "auth": { 00:13:46.787 "state": "completed", 00:13:46.787 "digest": "sha256", 00:13:46.787 "dhgroup": "null" 00:13:46.787 } 00:13:46.787 } 00:13:46.787 ]' 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.787 13:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.787 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.044 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:13:47.044 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.613 13:57:26 
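At this point the trace has advanced to the next DH group: the null-group passes are done and the same key matrix is being replayed with ffdhe2048. In outline, and only as a sketch of the loop structure implied by the auth.sh line numbers in the trace (the full dhgroups list presumably extends past the ffdhe3072 passes that close this excerpt):

  for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ...
      for keyid in "${!keys[@]}"; do       # key0..key3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done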
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.613 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.872 00:13:47.872 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.872 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.872 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.131 { 00:13:48.131 "cntlid": 9, 00:13:48.131 "qid": 0, 00:13:48.131 "state": "enabled", 00:13:48.131 "thread": "nvmf_tgt_poll_group_000", 00:13:48.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:48.131 "listen_address": { 00:13:48.131 "trtype": "TCP", 00:13:48.131 "adrfam": "IPv4", 00:13:48.131 "traddr": "10.0.0.2", 00:13:48.131 "trsvcid": "4420" 00:13:48.131 }, 00:13:48.131 "peer_address": { 00:13:48.131 "trtype": "TCP", 00:13:48.131 "adrfam": "IPv4", 00:13:48.131 "traddr": "10.0.0.1", 00:13:48.131 "trsvcid": "49502" 00:13:48.131 }, 00:13:48.131 "auth": { 00:13:48.131 "state": "completed", 00:13:48.131 "digest": "sha256", 00:13:48.131 "dhgroup": "ffdhe2048" 00:13:48.131 } 00:13:48.131 } 00:13:48.131 ]' 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.131 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.390 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:48.390 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.958 
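Each pass asserts the negotiated parameters straight from the target's qpair listing; the three jq probes above boil down to checks like these (the test spells the right-hand sides with backslash escapes to force literal, non-glob matching):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]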
13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.958 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.216 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.216 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.475 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.475 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.475 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.475 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.476 { 00:13:49.476 "cntlid": 11, 00:13:49.476 "qid": 0, 00:13:49.476 "state": "enabled", 00:13:49.476 "thread": "nvmf_tgt_poll_group_000", 00:13:49.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:49.476 "listen_address": { 00:13:49.476 "trtype": "TCP", 00:13:49.476 "adrfam": "IPv4", 00:13:49.476 "traddr": "10.0.0.2", 00:13:49.476 "trsvcid": "4420" 00:13:49.476 }, 00:13:49.476 "peer_address": { 00:13:49.476 "trtype": "TCP", 00:13:49.476 "adrfam": "IPv4", 00:13:49.476 "traddr": "10.0.0.1", 00:13:49.476 "trsvcid": "49532" 00:13:49.476 }, 00:13:49.476 "auth": { 00:13:49.476 "state": "completed", 00:13:49.476 "digest": "sha256", 00:13:49.476 "dhgroup": "ffdhe2048" 00:13:49.476 } 00:13:49.476 } 00:13:49.476 ]' 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.476 13:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.476 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.734 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:13:49.735 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.302 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:50.563 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.563 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.563 00:13:50.823 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.823 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.823 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.823 { 00:13:50.823 "cntlid": 13, 00:13:50.823 "qid": 0, 00:13:50.823 "state": "enabled", 00:13:50.823 "thread": "nvmf_tgt_poll_group_000", 00:13:50.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:50.823 "listen_address": { 00:13:50.823 "trtype": "TCP", 00:13:50.823 "adrfam": "IPv4", 00:13:50.823 "traddr": "10.0.0.2", 00:13:50.823 "trsvcid": "4420" 00:13:50.823 }, 00:13:50.823 "peer_address": { 00:13:50.823 "trtype": "TCP", 00:13:50.823 "adrfam": "IPv4", 00:13:50.823 "traddr": "10.0.0.1", 00:13:50.823 "trsvcid": "49566" 00:13:50.823 }, 00:13:50.823 "auth": { 00:13:50.823 "state": "completed", 00:13:50.823 "digest": 
"sha256", 00:13:50.823 "dhgroup": "ffdhe2048" 00:13:50.823 } 00:13:50.823 } 00:13:50.823 ]' 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:50.823 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.082 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.082 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.082 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.082 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:13:51.082 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:51.649 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.908 13:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.908 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.908 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.908 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.908 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.908 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.168 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.168 { 00:13:52.168 "cntlid": 15, 00:13:52.168 "qid": 0, 00:13:52.168 "state": "enabled", 00:13:52.168 "thread": "nvmf_tgt_poll_group_000", 00:13:52.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:52.168 "listen_address": { 00:13:52.168 "trtype": "TCP", 00:13:52.168 "adrfam": "IPv4", 00:13:52.168 "traddr": "10.0.0.2", 00:13:52.168 "trsvcid": "4420" 00:13:52.168 }, 00:13:52.168 "peer_address": { 00:13:52.168 "trtype": "TCP", 00:13:52.168 "adrfam": "IPv4", 00:13:52.168 "traddr": "10.0.0.1", 00:13:52.168 
"trsvcid": "49604" 00:13:52.168 }, 00:13:52.168 "auth": { 00:13:52.168 "state": "completed", 00:13:52.168 "digest": "sha256", 00:13:52.168 "dhgroup": "ffdhe2048" 00:13:52.168 } 00:13:52.168 } 00:13:52.168 ]' 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.168 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:52.427 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.427 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.427 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.427 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.427 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:13:52.427 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:52.995 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:53.254 13:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.254 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.514 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.514 { 00:13:53.514 "cntlid": 17, 00:13:53.514 "qid": 0, 00:13:53.514 "state": "enabled", 00:13:53.514 "thread": "nvmf_tgt_poll_group_000", 00:13:53.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:53.514 "listen_address": { 00:13:53.514 "trtype": "TCP", 00:13:53.514 "adrfam": "IPv4", 
00:13:53.514 "traddr": "10.0.0.2", 00:13:53.514 "trsvcid": "4420" 00:13:53.514 }, 00:13:53.514 "peer_address": { 00:13:53.514 "trtype": "TCP", 00:13:53.514 "adrfam": "IPv4", 00:13:53.514 "traddr": "10.0.0.1", 00:13:53.514 "trsvcid": "49634" 00:13:53.514 }, 00:13:53.514 "auth": { 00:13:53.514 "state": "completed", 00:13:53.514 "digest": "sha256", 00:13:53.514 "dhgroup": "ffdhe3072" 00:13:53.514 } 00:13:53.514 } 00:13:53.514 ]' 00:13:53.514 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.773 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.773 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:53.773 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:54.342 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.601 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.859 00:13:54.859 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.859 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.859 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.118 { 
00:13:55.118 "cntlid": 19, 00:13:55.118 "qid": 0, 00:13:55.118 "state": "enabled", 00:13:55.118 "thread": "nvmf_tgt_poll_group_000", 00:13:55.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:55.118 "listen_address": { 00:13:55.118 "trtype": "TCP", 00:13:55.118 "adrfam": "IPv4", 00:13:55.118 "traddr": "10.0.0.2", 00:13:55.118 "trsvcid": "4420" 00:13:55.118 }, 00:13:55.118 "peer_address": { 00:13:55.118 "trtype": "TCP", 00:13:55.118 "adrfam": "IPv4", 00:13:55.118 "traddr": "10.0.0.1", 00:13:55.118 "trsvcid": "49662" 00:13:55.118 }, 00:13:55.118 "auth": { 00:13:55.118 "state": "completed", 00:13:55.118 "digest": "sha256", 00:13:55.118 "dhgroup": "ffdhe3072" 00:13:55.118 } 00:13:55.118 } 00:13:55.118 ]' 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.118 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.377 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:13:55.377 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:55.946 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.946 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.204 00:13:56.204 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.204 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.204 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.463 13:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.463 { 00:13:56.463 "cntlid": 21, 00:13:56.463 "qid": 0, 00:13:56.463 "state": "enabled", 00:13:56.463 "thread": "nvmf_tgt_poll_group_000", 00:13:56.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:56.463 "listen_address": { 00:13:56.463 "trtype": "TCP", 00:13:56.463 "adrfam": "IPv4", 00:13:56.463 "traddr": "10.0.0.2", 00:13:56.463 "trsvcid": "4420" 00:13:56.463 }, 00:13:56.463 "peer_address": { 00:13:56.463 "trtype": "TCP", 00:13:56.463 "adrfam": "IPv4", 00:13:56.463 "traddr": "10.0.0.1", 00:13:56.463 "trsvcid": "60940" 00:13:56.463 }, 00:13:56.463 "auth": { 00:13:56.463 "state": "completed", 00:13:56.463 "digest": "sha256", 00:13:56.463 "dhgroup": "ffdhe3072" 00:13:56.463 } 00:13:56.463 } 00:13:56.463 ]' 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.463 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.721 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:13:56.721 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:13:57.288 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.288 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:57.288 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.288 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.288 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:57.288 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:57.289 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:57.547 00:13:57.547 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.547 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.547 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.806 13:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.806 { 00:13:57.806 "cntlid": 23, 00:13:57.806 "qid": 0, 00:13:57.806 "state": "enabled", 00:13:57.806 "thread": "nvmf_tgt_poll_group_000", 00:13:57.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:57.806 "listen_address": { 00:13:57.806 "trtype": "TCP", 00:13:57.806 "adrfam": "IPv4", 00:13:57.806 "traddr": "10.0.0.2", 00:13:57.806 "trsvcid": "4420" 00:13:57.806 }, 00:13:57.806 "peer_address": { 00:13:57.806 "trtype": "TCP", 00:13:57.806 "adrfam": "IPv4", 00:13:57.806 "traddr": "10.0.0.1", 00:13:57.806 "trsvcid": "60958" 00:13:57.806 }, 00:13:57.806 "auth": { 00:13:57.806 "state": "completed", 00:13:57.806 "digest": "sha256", 00:13:57.806 "dhgroup": "ffdhe3072" 00:13:57.806 } 00:13:57.806 } 00:13:57.806 ]' 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.806 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.807 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:57.807 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.807 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.807 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.807 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.065 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:13:58.065 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:58.633 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.894 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.894 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.152 { 00:13:59.152 "cntlid": 25, 00:13:59.152 "qid": 0, 00:13:59.152 "state": "enabled", 00:13:59.152 "thread": "nvmf_tgt_poll_group_000", 00:13:59.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:59.152 "listen_address": { 00:13:59.152 "trtype": "TCP", 00:13:59.152 "adrfam": "IPv4", 00:13:59.152 "traddr": "10.0.0.2", 00:13:59.152 "trsvcid": "4420" 00:13:59.152 }, 00:13:59.152 "peer_address": { 00:13:59.152 "trtype": "TCP", 00:13:59.152 "adrfam": "IPv4", 00:13:59.152 "traddr": "10.0.0.1", 00:13:59.152 "trsvcid": "60996" 00:13:59.152 }, 00:13:59.152 "auth": { 00:13:59.152 "state": "completed", 00:13:59.152 "digest": "sha256", 00:13:59.152 "dhgroup": "ffdhe4096" 00:13:59.152 } 00:13:59.152 } 00:13:59.152 ]' 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:59.152 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.411 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.411 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.411 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.411 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:59.411 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:59.978 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.237 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.495 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.495 { 00:14:00.495 "cntlid": 27, 00:14:00.495 "qid": 0, 00:14:00.495 "state": "enabled", 00:14:00.495 "thread": "nvmf_tgt_poll_group_000", 00:14:00.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:00.495 "listen_address": { 00:14:00.495 "trtype": "TCP", 00:14:00.495 "adrfam": "IPv4", 00:14:00.495 "traddr": "10.0.0.2", 00:14:00.495 "trsvcid": "4420" 00:14:00.495 }, 00:14:00.495 "peer_address": { 00:14:00.495 "trtype": "TCP", 00:14:00.495 "adrfam": "IPv4", 00:14:00.495 "traddr": "10.0.0.1", 00:14:00.495 "trsvcid": "32780" 00:14:00.495 }, 00:14:00.495 "auth": { 00:14:00.495 "state": "completed", 00:14:00.495 "digest": "sha256", 00:14:00.495 "dhgroup": "ffdhe4096" 00:14:00.495 } 00:14:00.495 } 00:14:00.495 ]' 00:14:00.495 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.753 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.753 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:00.753 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:01.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:01.322 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.580 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.840 00:14:01.840 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
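The records above repeat one verification cycle per (digest, dhgroup, keyid) combination: pin the host-side driver with bdev_nvme_set_options, register the host's DH-HMAC-CHAP key on the subsystem with nvmf_subsystem_add_host, attach a controller so the authentication transaction runs, check the qpair's auth block, then detach. What follows is a minimal bash sketch of that cycle, using only commands and flags visible in this log; the loop scaffolding, the tgtrpc/hostrpc helper names, the default target socket, and the ckeys array are assumptions for illustration, not an excerpt of target/auth.sh.

#!/usr/bin/env bash
# Sketch reconstructed from the log records above. Assumes the target RPC
# server listens on SPDK's default socket and the host-side app on
# /var/tmp/host.sock, as the hostrpc invocations in this log show.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme_* RPCs
tgtrpc()  { "$RPC" "$@"; }                        # target-side nvmf_* RPCs (assumed default socket)

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
ckeys=(1 1 1)  # index 3 left unset: key3 has no controller key in this run

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in 0 1 2 3; do
    # Pin the host driver to one digest/dhgroup pair so negotiation is deterministic.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # Allow this host on the subsystem with the matching key (and controller key, if any).
    tgtrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Attaching a controller forces DH-HMAC-CHAP to run on the new admin qpair.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # The qpair's auth block should report the negotiated parameters.
    qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256     ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
    hostrpc bdev_nvme_detach_controller nvme0
  done
done

As the surrounding records show, each cycle then re-runs the same check from the kernel initiator (nvme connect -t tcp ... --dhchap-secret DHHC-1:xx:... and, where a controller key exists, --dhchap-ctrl-secret, followed by nvme disconnect -n) before nvmf_subsystem_remove_host clears the host entry for the next combination.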
00:14:01.840 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.840 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.100 { 00:14:02.100 "cntlid": 29, 00:14:02.100 "qid": 0, 00:14:02.100 "state": "enabled", 00:14:02.100 "thread": "nvmf_tgt_poll_group_000", 00:14:02.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:02.100 "listen_address": { 00:14:02.100 "trtype": "TCP", 00:14:02.100 "adrfam": "IPv4", 00:14:02.100 "traddr": "10.0.0.2", 00:14:02.100 "trsvcid": "4420" 00:14:02.100 }, 00:14:02.100 "peer_address": { 00:14:02.100 "trtype": "TCP", 00:14:02.100 "adrfam": "IPv4", 00:14:02.100 "traddr": "10.0.0.1", 00:14:02.100 "trsvcid": "32808" 00:14:02.100 }, 00:14:02.100 "auth": { 00:14:02.100 "state": "completed", 00:14:02.100 "digest": "sha256", 00:14:02.100 "dhgroup": "ffdhe4096" 00:14:02.100 } 00:14:02.100 } 00:14:02.100 ]' 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.100 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.358 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:02.358 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: 
--dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:02.925 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:02.925 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:03.184 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:03.184 00:14:03.443 13:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.443 { 00:14:03.443 "cntlid": 31, 00:14:03.443 "qid": 0, 00:14:03.443 "state": "enabled", 00:14:03.443 "thread": "nvmf_tgt_poll_group_000", 00:14:03.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:03.443 "listen_address": { 00:14:03.443 "trtype": "TCP", 00:14:03.443 "adrfam": "IPv4", 00:14:03.443 "traddr": "10.0.0.2", 00:14:03.443 "trsvcid": "4420" 00:14:03.443 }, 00:14:03.443 "peer_address": { 00:14:03.443 "trtype": "TCP", 00:14:03.443 "adrfam": "IPv4", 00:14:03.443 "traddr": "10.0.0.1", 00:14:03.443 "trsvcid": "32842" 00:14:03.443 }, 00:14:03.443 "auth": { 00:14:03.443 "state": "completed", 00:14:03.443 "digest": "sha256", 00:14:03.443 "dhgroup": "ffdhe4096" 00:14:03.443 } 00:14:03.443 } 00:14:03.443 ]' 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:03.443 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.702 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.702 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.703 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.703 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:03.703 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:04.271 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.530 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.788 00:14:04.788 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.788 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.788 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.047 { 00:14:05.047 "cntlid": 33, 00:14:05.047 "qid": 0, 00:14:05.047 "state": "enabled", 00:14:05.047 "thread": "nvmf_tgt_poll_group_000", 00:14:05.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:05.047 "listen_address": { 00:14:05.047 "trtype": "TCP", 00:14:05.047 "adrfam": "IPv4", 00:14:05.047 "traddr": "10.0.0.2", 00:14:05.047 "trsvcid": "4420" 00:14:05.047 }, 00:14:05.047 "peer_address": { 00:14:05.047 "trtype": "TCP", 00:14:05.047 "adrfam": "IPv4", 00:14:05.047 "traddr": "10.0.0.1", 00:14:05.047 "trsvcid": "32864" 00:14:05.047 }, 00:14:05.047 "auth": { 00:14:05.047 "state": "completed", 00:14:05.047 "digest": "sha256", 00:14:05.047 "dhgroup": "ffdhe6144" 00:14:05.047 } 00:14:05.047 } 00:14:05.047 ]' 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.047 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.306 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret 
DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:05.306 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:05.875 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.875 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.876 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.876 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.876 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.876 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.443 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.444 { 00:14:06.444 "cntlid": 35, 00:14:06.444 "qid": 0, 00:14:06.444 "state": "enabled", 00:14:06.444 "thread": "nvmf_tgt_poll_group_000", 00:14:06.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:06.444 "listen_address": { 00:14:06.444 "trtype": "TCP", 00:14:06.444 "adrfam": "IPv4", 00:14:06.444 "traddr": "10.0.0.2", 00:14:06.444 "trsvcid": "4420" 00:14:06.444 }, 00:14:06.444 "peer_address": { 00:14:06.444 "trtype": "TCP", 00:14:06.444 "adrfam": "IPv4", 00:14:06.444 "traddr": "10.0.0.1", 00:14:06.444 "trsvcid": "60570" 00:14:06.444 }, 00:14:06.444 "auth": { 00:14:06.444 "state": "completed", 00:14:06.444 "digest": "sha256", 00:14:06.444 "dhgroup": "ffdhe6144" 00:14:06.444 } 00:14:06.444 } 00:14:06.444 ]' 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.444 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:06.702 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==:
00:14:06.702 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==:
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:07.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:07.271 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.530 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.789
00:14:07.789 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:07.789 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:07.789 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:08.048 {
00:14:08.048 "cntlid": 37,
00:14:08.048 "qid": 0,
00:14:08.048 "state": "enabled",
00:14:08.048 "thread": "nvmf_tgt_poll_group_000",
00:14:08.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:08.048 "listen_address": {
00:14:08.048 "trtype": "TCP",
00:14:08.048 "adrfam": "IPv4",
00:14:08.048 "traddr": "10.0.0.2",
00:14:08.048 "trsvcid": "4420"
00:14:08.048 },
00:14:08.048 "peer_address": {
00:14:08.048 "trtype": "TCP",
00:14:08.048 "adrfam": "IPv4",
00:14:08.048 "traddr": "10.0.0.1",
00:14:08.048 "trsvcid": "60608"
00:14:08.048 },
00:14:08.048 "auth": {
00:14:08.048 "state": "completed",
00:14:08.048 "digest": "sha256",
00:14:08.048 "dhgroup": "ffdhe6144"
00:14:08.048 }
00:14:08.048 }
00:14:08.048 ]'
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:08.048 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:08.307 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc:
00:14:08.307 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc:
00:14:08.874 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:08.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:08.874 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:08.875 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.875 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.875 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.875 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:08.875 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:08.875 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:08.875 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:09.134
00:14:09.134 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:09.134 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:09.134 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:09.393 {
00:14:09.393 "cntlid": 39,
00:14:09.393 "qid": 0,
00:14:09.393 "state": "enabled",
00:14:09.393 "thread": "nvmf_tgt_poll_group_000",
00:14:09.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:09.393 "listen_address": {
00:14:09.393 "trtype": "TCP",
00:14:09.393 "adrfam": "IPv4",
00:14:09.393 "traddr": "10.0.0.2",
00:14:09.393 "trsvcid": "4420"
00:14:09.393 },
00:14:09.393 "peer_address": {
00:14:09.393 "trtype": "TCP",
00:14:09.393 "adrfam": "IPv4",
00:14:09.393 "traddr": "10.0.0.1",
00:14:09.393 "trsvcid": "60648"
00:14:09.393 },
00:14:09.393 "auth": {
00:14:09.393 "state": "completed",
00:14:09.393 "digest": "sha256",
00:14:09.393 "dhgroup": "ffdhe6144"
00:14:09.393 }
00:14:09.393 }
00:14:09.393 ]'
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:09.393 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:09.652 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=:
00:14:09.652 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=:
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:10.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:10.220 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.477 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.735
00:14:10.735 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:10.735 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:10.735 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:10.994 {
00:14:10.994 "cntlid": 41,
00:14:10.994 "qid": 0,
00:14:10.994 "state": "enabled",
00:14:10.994 "thread": "nvmf_tgt_poll_group_000",
00:14:10.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:10.994 "listen_address": {
00:14:10.994 "trtype": "TCP",
00:14:10.994 "adrfam": "IPv4",
00:14:10.994 "traddr": "10.0.0.2",
00:14:10.994 "trsvcid": "4420"
00:14:10.994 },
00:14:10.994 "peer_address": {
00:14:10.994 "trtype": "TCP",
00:14:10.994 "adrfam": "IPv4",
00:14:10.994 "traddr": "10.0.0.1",
00:14:10.994 "trsvcid": "60670"
00:14:10.994 },
00:14:10.994 "auth": {
00:14:10.994 "state": "completed",
00:14:10.994 "digest": "sha256",
00:14:10.994 "dhgroup": "ffdhe8192"
00:14:10.994 }
00:14:10.994 }
00:14:10.994 ]'
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:10.994 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:11.253 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=:
00:14:11.253 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=:
00:14:11.820 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:11.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:11.820 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:11.820 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.820 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.820 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.820 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:11.820 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:11.820 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.079 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.342
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:12.707 {
00:14:12.707 "cntlid": 43,
00:14:12.707 "qid": 0,
00:14:12.707 "state": "enabled",
00:14:12.707 "thread": "nvmf_tgt_poll_group_000",
00:14:12.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:12.707 "listen_address": {
00:14:12.707 "trtype": "TCP",
00:14:12.707 "adrfam": "IPv4",
00:14:12.707 "traddr": "10.0.0.2",
00:14:12.707 "trsvcid": "4420"
00:14:12.707 },
00:14:12.707 "peer_address": {
00:14:12.707 "trtype": "TCP",
00:14:12.707 "adrfam": "IPv4",
00:14:12.707 "traddr": "10.0.0.1",
00:14:12.707 "trsvcid": "60698"
00:14:12.707 },
00:14:12.707 "auth": {
00:14:12.707 "state": "completed",
00:14:12.707 "digest": "sha256",
00:14:12.707 "dhgroup": "ffdhe8192"
00:14:12.707 }
00:14:12.707 }
00:14:12.707 ]'
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:12.707 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:12.708 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:12.708 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:12.708 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:12.708 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:12.708 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:12.978 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==:
00:14:12.978 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==:
00:14:13.547 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:13.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:13.548 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.117
00:14:14.117 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:14.117 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:14.117 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:14.376 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:14.376 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:14.376 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.376 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.376 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.376 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:14.376 {
00:14:14.376 "cntlid": 45,
00:14:14.376 "qid": 0,
00:14:14.376 "state": "enabled",
00:14:14.376 "thread": "nvmf_tgt_poll_group_000",
00:14:14.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:14.376 "listen_address": {
00:14:14.376 "trtype": "TCP",
00:14:14.376 "adrfam": "IPv4",
00:14:14.376 "traddr": "10.0.0.2",
00:14:14.376 "trsvcid": "4420"
00:14:14.376 },
00:14:14.377 "peer_address": {
00:14:14.377 "trtype": "TCP",
00:14:14.377 "adrfam": "IPv4",
00:14:14.377 "traddr": "10.0.0.1",
00:14:14.377 "trsvcid": "60722"
00:14:14.377 },
00:14:14.377 "auth": {
00:14:14.377 "state": "completed",
00:14:14.377 "digest": "sha256",
00:14:14.377 "dhgroup": "ffdhe8192"
00:14:14.377 }
00:14:14.377 }
00:14:14.377 ]'
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:14.377 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:14.636 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc:
00:14:14.636 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc:
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:15.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:15.203 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:15.771
00:14:15.771 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:15.771 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:15.771 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:15.771 {
00:14:15.771 "cntlid": 47,
00:14:15.771 "qid": 0,
00:14:15.771 "state": "enabled",
00:14:15.771 "thread": "nvmf_tgt_poll_group_000",
00:14:15.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:15.771 "listen_address": {
00:14:15.771 "trtype": "TCP",
00:14:15.771 "adrfam": "IPv4",
00:14:15.771 "traddr": "10.0.0.2",
00:14:15.771 "trsvcid": "4420"
00:14:15.771 },
00:14:15.771 "peer_address": {
00:14:15.771 "trtype": "TCP",
00:14:15.771 "adrfam": "IPv4",
00:14:15.771 "traddr": "10.0.0.1",
00:14:15.771 "trsvcid": "43360"
00:14:15.771 },
00:14:15.771 "auth": {
00:14:15.771 "state": "completed",
00:14:15.771 "digest": "sha256",
00:14:15.771 "dhgroup": "ffdhe8192"
00:14:15.771 }
00:14:15.771 }
00:14:15.771 ]'
00:14:15.771 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=:
00:14:16.030 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=:
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:16.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:16.659 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.918 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:17.177
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:17.177 {
00:14:17.177 "cntlid": 49,
00:14:17.177 "qid": 0,
00:14:17.177 "state": "enabled",
00:14:17.177 "thread": "nvmf_tgt_poll_group_000",
00:14:17.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:17.177 "listen_address": {
00:14:17.177 "trtype": "TCP",
00:14:17.177 "adrfam": "IPv4",
00:14:17.177 "traddr": "10.0.0.2",
00:14:17.177 "trsvcid": "4420"
00:14:17.177 },
00:14:17.177 "peer_address": {
00:14:17.177 "trtype": "TCP",
00:14:17.177 "adrfam": "IPv4",
00:14:17.177 "traddr": "10.0.0.1",
00:14:17.177 "trsvcid": "43390"
00:14:17.177 },
00:14:17.177 "auth": {
00:14:17.177 "state": "completed",
00:14:17.177 "digest": "sha384",
00:14:17.177 "dhgroup": "null"
00:14:17.177 }
00:14:17.177 }
00:14:17.177 ]'
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:17.177 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=:
00:14:17.436 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=:
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:18.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:18.004 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.263 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.521
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:18.521 {
00:14:18.521 "cntlid": 51,
00:14:18.521 "qid": 0,
00:14:18.521 "state": "enabled",
00:14:18.521 "thread": "nvmf_tgt_poll_group_000",
00:14:18.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:18.521 "listen_address": {
00:14:18.521 "trtype": "TCP",
00:14:18.521 "adrfam": "IPv4",
00:14:18.521 "traddr": "10.0.0.2",
00:14:18.521 "trsvcid": "4420"
00:14:18.521 },
00:14:18.521 "peer_address": {
00:14:18.521 "trtype": "TCP",
00:14:18.521 "adrfam": "IPv4",
00:14:18.521 "traddr": "10.0.0.1",
00:14:18.521 "trsvcid": "43422"
00:14:18.521 },
00:14:18.521 "auth": {
00:14:18.521 "state": "completed",
00:14:18.521 "digest": "sha384",
00:14:18.521 "dhgroup": "null"
00:14:18.521 }
00:14:18.521 }
00:14:18.521 ]'
00:14:18.521 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:18.780 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:18.780 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:18.780 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:18.780 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:18.780 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:18.780 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:18.780 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:18.780 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==:
00:14:18.780 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==:
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:19.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:19.346 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:19.604 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:19.863
00:14:19.863 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:19.863 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:19.863 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:20.121 {
00:14:20.121 "cntlid": 53,
00:14:20.121 "qid": 0,
00:14:20.121 "state": "enabled",
00:14:20.121 "thread": "nvmf_tgt_poll_group_000",
00:14:20.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:20.121 "listen_address": {
00:14:20.121 "trtype": "TCP",
00:14:20.121 "adrfam": "IPv4",
00:14:20.121 "traddr": "10.0.0.2",
00:14:20.121 "trsvcid": "4420"
00:14:20.121 },
00:14:20.121 "peer_address": {
00:14:20.121 "trtype": "TCP",
00:14:20.121 "adrfam": "IPv4",
00:14:20.121 "traddr": "10.0.0.1",
00:14:20.121 "trsvcid": "43458"
00:14:20.121 },
00:14:20.121 "auth": {
00:14:20.121 "state": "completed",
00:14:20.121 "digest": "sha384",
00:14:20.121 "dhgroup": "null"
00:14:20.121 }
00:14:20.121 }
00:14:20.121 ]'
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc:
00:14:20.121 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc:
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:20.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:20.716 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:20.975 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:21.233
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.233 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.492 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.492 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:21.492 {
00:14:21.492 "cntlid": 55,
00:14:21.492 "qid": 0,
00:14:21.492 "state": "enabled",
00:14:21.492 "thread": "nvmf_tgt_poll_group_000",
00:14:21.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:14:21.492 "listen_address": {
00:14:21.492 "trtype": "TCP",
00:14:21.492 "adrfam": "IPv4",
00:14:21.492 "traddr": "10.0.0.2",
00:14:21.492 "trsvcid": "4420"
00:14:21.492 },
00:14:21.492 "peer_address": {
00:14:21.492 "trtype": "TCP",
00:14:21.492 "adrfam": "IPv4",
00:14:21.492 "traddr": "10.0.0.1",
00:14:21.492 "trsvcid": "43478"
00:14:21.492 },
00:14:21.492 "auth": {
00:14:21.492 "state": "completed",
00:14:21.492 "digest": "sha384",
00:14:21.492 "dhgroup": "null"
00:14:21.492 }
00:14:21.492 }
00:14:21.492 ]'
00:14:21.492 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:21.492 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:21.493 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:21.493 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:21.493 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:21.493 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:21.493 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:21.493 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:21.751 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=:
00:14:21.751 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=:
00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:22.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:22.317 13:58:01
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.317 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.576 00:14:22.576 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.576 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.576 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.833 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.833 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.833 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:22.833 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.833 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.833 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.833 { 00:14:22.833 "cntlid": 57, 00:14:22.833 "qid": 0, 00:14:22.833 "state": "enabled", 00:14:22.833 "thread": "nvmf_tgt_poll_group_000", 00:14:22.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:22.833 "listen_address": { 00:14:22.833 "trtype": "TCP", 00:14:22.833 "adrfam": "IPv4", 00:14:22.833 "traddr": "10.0.0.2", 00:14:22.833 "trsvcid": "4420" 00:14:22.833 }, 00:14:22.833 "peer_address": { 00:14:22.833 "trtype": "TCP", 00:14:22.833 "adrfam": "IPv4", 00:14:22.833 "traddr": "10.0.0.1", 00:14:22.834 "trsvcid": "43514" 00:14:22.834 }, 00:14:22.834 "auth": { 00:14:22.834 "state": "completed", 00:14:22.834 "digest": "sha384", 00:14:22.834 "dhgroup": "ffdhe2048" 00:14:22.834 } 00:14:22.834 } 00:14:22.834 ]' 00:14:22.834 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.834 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.834 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.834 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.834 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.834 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.834 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.834 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.090 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:23.090 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.657 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.916 00:14:23.916 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.916 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.916 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.175 { 00:14:24.175 "cntlid": 59, 00:14:24.175 "qid": 0, 00:14:24.175 "state": "enabled", 00:14:24.175 "thread": "nvmf_tgt_poll_group_000", 00:14:24.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:24.175 "listen_address": { 00:14:24.175 "trtype": "TCP", 00:14:24.175 "adrfam": "IPv4", 00:14:24.175 "traddr": "10.0.0.2", 00:14:24.175 "trsvcid": "4420" 00:14:24.175 }, 00:14:24.175 "peer_address": { 00:14:24.175 "trtype": "TCP", 00:14:24.175 "adrfam": "IPv4", 00:14:24.175 "traddr": "10.0.0.1", 00:14:24.175 "trsvcid": "43526" 00:14:24.175 }, 00:14:24.175 "auth": { 00:14:24.175 "state": "completed", 00:14:24.175 "digest": "sha384", 00:14:24.175 "dhgroup": "ffdhe2048" 00:14:24.175 } 00:14:24.175 } 00:14:24.175 ]' 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.175 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.433 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:24.433 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.000 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.259 00:14:25.259 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.259 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.259 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.518 { 00:14:25.518 "cntlid": 61, 00:14:25.518 "qid": 0, 00:14:25.518 "state": "enabled", 00:14:25.518 "thread": "nvmf_tgt_poll_group_000", 00:14:25.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:25.518 "listen_address": { 00:14:25.518 "trtype": "TCP", 00:14:25.518 "adrfam": "IPv4", 00:14:25.518 "traddr": "10.0.0.2", 00:14:25.518 "trsvcid": "4420" 00:14:25.518 }, 00:14:25.518 "peer_address": { 00:14:25.518 "trtype": "TCP", 00:14:25.518 "adrfam": "IPv4", 00:14:25.518 "traddr": "10.0.0.1", 00:14:25.518 "trsvcid": "55914" 00:14:25.518 }, 00:14:25.518 "auth": { 00:14:25.518 "state": "completed", 00:14:25.518 "digest": "sha384", 00:14:25.518 "dhgroup": "ffdhe2048" 00:14:25.518 } 00:14:25.518 } 00:14:25.518 ]' 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.518 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.777 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:25.777 13:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:26.345 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.603 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.603 00:14:26.861 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.861 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.861 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.861 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.861 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.861 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.861 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.861 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.861 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.861 { 00:14:26.861 "cntlid": 63, 00:14:26.861 "qid": 0, 00:14:26.861 "state": "enabled", 00:14:26.862 "thread": "nvmf_tgt_poll_group_000", 00:14:26.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:26.862 "listen_address": { 00:14:26.862 "trtype": "TCP", 00:14:26.862 "adrfam": "IPv4", 00:14:26.862 "traddr": "10.0.0.2", 00:14:26.862 "trsvcid": "4420" 00:14:26.862 }, 00:14:26.862 "peer_address": { 00:14:26.862 "trtype": "TCP", 00:14:26.862 "adrfam": "IPv4", 00:14:26.862 "traddr": "10.0.0.1", 00:14:26.862 "trsvcid": "55950" 00:14:26.862 }, 00:14:26.862 "auth": { 00:14:26.862 "state": "completed", 00:14:26.862 "digest": "sha384", 00:14:26.862 "dhgroup": "ffdhe2048" 00:14:26.862 } 00:14:26.862 } 00:14:26.862 ]' 00:14:26.862 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.862 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.862 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.862 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.862 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.120 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.120 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.120 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.120 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:27.120 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:27.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:27.689 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.949 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.207 
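The trace above repeats one connect/verify/teardown cycle per digest, DH group, and key index. Distilled into a standalone sketch (the variable names RPC, HOST_SOCK, SUBNQN, and HOSTNQN are introduced here for readability; every flag, path, and value is taken from the logged commands):

```bash
#!/usr/bin/env bash
# One connect_authenticate iteration, distilled from the xtrace above.
# Variable names are ours; flags and paths are as logged.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

# 1. Pin the host-side initiator to a single digest/DH-group combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2. Authorize the host on the target with a DH-CHAP key (plus optional
#    controller key). The trace's rpc_cmd is assumed to hit the target's
#    default RPC socket, so no -s is passed here.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach from the host side; this performs the DH-CHAP handshake.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Assert the negotiated auth parameters on the target's qpair
#    (the same three fields the trace checks with separate jq calls).
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e \
    '.[0].auth | .digest == "sha384" and .dhgroup == "ffdhe3072" and .state == "completed"'

# 5. Detach before the next iteration.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
```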
00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.207 { 00:14:28.207 "cntlid": 65, 00:14:28.207 "qid": 0, 00:14:28.207 "state": "enabled", 00:14:28.207 "thread": "nvmf_tgt_poll_group_000", 00:14:28.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:28.207 "listen_address": { 00:14:28.207 "trtype": "TCP", 00:14:28.207 "adrfam": "IPv4", 00:14:28.207 "traddr": "10.0.0.2", 00:14:28.207 "trsvcid": "4420" 00:14:28.207 }, 00:14:28.207 "peer_address": { 00:14:28.207 "trtype": "TCP", 00:14:28.207 "adrfam": "IPv4", 00:14:28.207 "traddr": "10.0.0.1", 00:14:28.207 "trsvcid": "55974" 00:14:28.207 }, 00:14:28.207 "auth": { 00:14:28.207 "state": "completed", 00:14:28.207 "digest": "sha384", 00:14:28.207 "dhgroup": "ffdhe3072" 00:14:28.207 } 00:14:28.207 } 00:14:28.207 ]' 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.207 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:28.466 13:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.033 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.291 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.550 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.550 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.810 { 00:14:29.810 "cntlid": 67, 00:14:29.810 "qid": 0, 00:14:29.810 "state": "enabled", 00:14:29.810 "thread": "nvmf_tgt_poll_group_000", 00:14:29.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:29.810 "listen_address": { 00:14:29.810 "trtype": "TCP", 00:14:29.810 "adrfam": "IPv4", 00:14:29.810 "traddr": "10.0.0.2", 00:14:29.810 "trsvcid": "4420" 00:14:29.810 }, 00:14:29.810 "peer_address": { 00:14:29.810 "trtype": "TCP", 00:14:29.810 "adrfam": "IPv4", 00:14:29.810 "traddr": "10.0.0.1", 00:14:29.810 "trsvcid": "55998" 00:14:29.810 }, 00:14:29.810 "auth": { 00:14:29.810 "state": "completed", 00:14:29.810 "digest": "sha384", 00:14:29.810 "dhgroup": "ffdhe3072" 00:14:29.810 } 00:14:29.810 } 00:14:29.810 ]' 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.810 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.810 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret 
DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:29.810 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:30.377 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.636 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.895 00:14:30.895 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.895 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.895 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.155 { 00:14:31.155 "cntlid": 69, 00:14:31.155 "qid": 0, 00:14:31.155 "state": "enabled", 00:14:31.155 "thread": "nvmf_tgt_poll_group_000", 00:14:31.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:31.155 "listen_address": { 00:14:31.155 "trtype": "TCP", 00:14:31.155 "adrfam": "IPv4", 00:14:31.155 "traddr": "10.0.0.2", 00:14:31.155 "trsvcid": "4420" 00:14:31.155 }, 00:14:31.155 "peer_address": { 00:14:31.155 "trtype": "TCP", 00:14:31.155 "adrfam": "IPv4", 00:14:31.155 "traddr": "10.0.0.1", 00:14:31.155 "trsvcid": "56024" 00:14:31.155 }, 00:14:31.155 "auth": { 00:14:31.155 "state": "completed", 00:14:31.155 "digest": "sha384", 00:14:31.155 "dhgroup": "ffdhe3072" 00:14:31.155 } 00:14:31.155 } 00:14:31.155 ]' 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.155 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:31.414 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:31.414 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
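After each host-RPC pass, the test cross-checks the same key material through the kernel initiator, as in the nvme_connect / nvme disconnect records above. A minimal sketch, with the DHHC-1 secrets abbreviated (the full strings appear verbatim in the log):

```bash
#!/usr/bin/env bash
# Kernel-initiator cross-check mirroring the trace's nvme_connect step.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

# Connect with the host secret and (for keys that have one) the controller
# secret; -i 1 and -l 0 as logged. Secrets elided; see the log for the values.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n "$SUBNQN"

# Revoke the host on the target before the next key/dhgroup combination
# (rpc.py against the target's default RPC socket, as rpc_cmd does above).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
```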
00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.981 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.239 00:14:32.239 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.240 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.240 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.498 { 00:14:32.498 "cntlid": 71, 00:14:32.498 "qid": 0, 00:14:32.498 "state": "enabled", 00:14:32.498 "thread": "nvmf_tgt_poll_group_000", 00:14:32.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:32.498 "listen_address": { 00:14:32.498 "trtype": "TCP", 00:14:32.498 "adrfam": "IPv4", 00:14:32.498 "traddr": "10.0.0.2", 00:14:32.498 "trsvcid": "4420" 00:14:32.498 }, 00:14:32.498 "peer_address": { 00:14:32.498 "trtype": "TCP", 00:14:32.498 "adrfam": "IPv4", 00:14:32.498 "traddr": "10.0.0.1", 00:14:32.498 "trsvcid": "56038" 00:14:32.498 }, 00:14:32.498 "auth": { 00:14:32.498 "state": "completed", 00:14:32.498 "digest": "sha384", 00:14:32.498 "dhgroup": "ffdhe3072" 00:14:32.498 } 00:14:32.498 } 00:14:32.498 ]' 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.498 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.499 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.757 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:32.757 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
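Each successful attach is verified against the target's view of the connection: nvmf_subsystem_get_qpairs returns the JSON array printed repeatedly above, and the negotiated digest, DH group and auth state are asserted with jq. A minimal standalone sketch of that check (rpc.py path shortened; the script itself goes through its rpc_cmd wrapper):

  # verify the negotiated auth parameters on the target side (sketch)
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]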
00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.325 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.584 00:14:33.584 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.584 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.584 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.843 { 00:14:33.843 "cntlid": 73, 00:14:33.843 "qid": 0, 00:14:33.843 "state": "enabled", 00:14:33.843 "thread": "nvmf_tgt_poll_group_000", 00:14:33.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:33.843 "listen_address": { 00:14:33.843 "trtype": "TCP", 00:14:33.843 "adrfam": "IPv4", 00:14:33.843 "traddr": "10.0.0.2", 00:14:33.843 "trsvcid": "4420" 00:14:33.843 }, 00:14:33.843 "peer_address": { 00:14:33.843 "trtype": "TCP", 00:14:33.843 "adrfam": "IPv4", 00:14:33.843 "traddr": "10.0.0.1", 00:14:33.843 "trsvcid": "56058" 00:14:33.843 }, 00:14:33.843 "auth": { 00:14:33.843 "state": "completed", 00:14:33.843 "digest": "sha384", 00:14:33.843 "dhgroup": "ffdhe4096" 00:14:33.843 } 00:14:33.843 } 00:14:33.843 ]' 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.843 
13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.843 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.102 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:34.102 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:34.671 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.929 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.187 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.187 { 00:14:35.187 "cntlid": 75, 00:14:35.187 "qid": 0, 00:14:35.187 "state": "enabled", 00:14:35.187 "thread": "nvmf_tgt_poll_group_000", 00:14:35.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:35.187 "listen_address": { 00:14:35.187 "trtype": "TCP", 00:14:35.187 "adrfam": "IPv4", 00:14:35.187 "traddr": "10.0.0.2", 00:14:35.187 "trsvcid": "4420" 00:14:35.187 }, 00:14:35.187 "peer_address": { 00:14:35.187 "trtype": "TCP", 00:14:35.187 "adrfam": "IPv4", 00:14:35.187 "traddr": "10.0.0.1", 00:14:35.187 "trsvcid": "44464" 00:14:35.187 }, 00:14:35.187 "auth": { 00:14:35.187 "state": "completed", 00:14:35.187 "digest": "sha384", 00:14:35.187 "dhgroup": "ffdhe4096" 00:14:35.187 } 00:14:35.187 } 00:14:35.187 ]' 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.187 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:35.447 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.015 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.275 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.534 00:14:36.534 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.534 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.534 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.792 { 00:14:36.792 "cntlid": 77, 00:14:36.792 "qid": 0, 00:14:36.792 "state": "enabled", 00:14:36.792 "thread": "nvmf_tgt_poll_group_000", 00:14:36.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:36.792 "listen_address": { 00:14:36.792 "trtype": "TCP", 00:14:36.792 "adrfam": "IPv4", 00:14:36.792 "traddr": "10.0.0.2", 00:14:36.792 "trsvcid": "4420" 00:14:36.792 }, 00:14:36.792 "peer_address": { 00:14:36.792 "trtype": "TCP", 00:14:36.792 "adrfam": "IPv4", 00:14:36.792 "traddr": "10.0.0.1", 00:14:36.792 "trsvcid": "44494" 00:14:36.792 }, 00:14:36.792 "auth": { 00:14:36.792 "state": "completed", 00:14:36.792 "digest": "sha384", 00:14:36.792 "dhgroup": "ffdhe4096" 00:14:36.792 } 00:14:36.792 } 00:14:36.792 ]' 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.792 13:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.792 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.050 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:37.050 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.618 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.877 00:14:37.877 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.877 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.877 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.136 { 00:14:38.136 "cntlid": 79, 00:14:38.136 "qid": 0, 00:14:38.136 "state": "enabled", 00:14:38.136 "thread": "nvmf_tgt_poll_group_000", 00:14:38.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:38.136 "listen_address": { 00:14:38.136 "trtype": "TCP", 00:14:38.136 "adrfam": "IPv4", 00:14:38.136 "traddr": "10.0.0.2", 00:14:38.136 "trsvcid": "4420" 00:14:38.136 }, 00:14:38.136 "peer_address": { 00:14:38.136 "trtype": "TCP", 00:14:38.136 "adrfam": "IPv4", 00:14:38.136 "traddr": "10.0.0.1", 00:14:38.136 "trsvcid": "44516" 00:14:38.136 }, 00:14:38.136 "auth": { 00:14:38.136 "state": "completed", 00:14:38.136 "digest": "sha384", 00:14:38.136 "dhgroup": "ffdhe4096" 00:14:38.136 } 00:14:38.136 } 00:14:38.136 ]' 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.136 13:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.136 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.394 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:38.394 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:38.961 13:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.961 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.528 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.528 { 00:14:39.528 "cntlid": 81, 00:14:39.528 "qid": 0, 00:14:39.528 "state": "enabled", 00:14:39.528 "thread": "nvmf_tgt_poll_group_000", 00:14:39.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:39.528 "listen_address": { 00:14:39.528 "trtype": "TCP", 00:14:39.528 "adrfam": "IPv4", 00:14:39.528 "traddr": "10.0.0.2", 00:14:39.528 "trsvcid": "4420" 00:14:39.528 }, 00:14:39.528 "peer_address": { 00:14:39.528 "trtype": "TCP", 00:14:39.528 "adrfam": "IPv4", 00:14:39.528 "traddr": "10.0.0.1", 00:14:39.528 "trsvcid": "44544" 00:14:39.528 }, 00:14:39.528 "auth": { 00:14:39.528 "state": "completed", 00:14:39.528 "digest": 
"sha384", 00:14:39.528 "dhgroup": "ffdhe6144" 00:14:39.528 } 00:14:39.528 } 00:14:39.528 ]' 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.528 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.786 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:39.786 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:40.353 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.612 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.869 00:14:40.869 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.869 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.869 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.127 { 00:14:41.127 "cntlid": 83, 00:14:41.127 "qid": 0, 00:14:41.127 "state": "enabled", 00:14:41.127 "thread": "nvmf_tgt_poll_group_000", 00:14:41.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:41.127 "listen_address": { 00:14:41.127 "trtype": "TCP", 00:14:41.127 "adrfam": "IPv4", 00:14:41.127 "traddr": "10.0.0.2", 00:14:41.127 
"trsvcid": "4420" 00:14:41.127 }, 00:14:41.127 "peer_address": { 00:14:41.127 "trtype": "TCP", 00:14:41.127 "adrfam": "IPv4", 00:14:41.127 "traddr": "10.0.0.1", 00:14:41.127 "trsvcid": "44580" 00:14:41.127 }, 00:14:41.127 "auth": { 00:14:41.127 "state": "completed", 00:14:41.127 "digest": "sha384", 00:14:41.127 "dhgroup": "ffdhe6144" 00:14:41.127 } 00:14:41.127 } 00:14:41.127 ]' 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.127 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.386 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:41.386 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:41.953 
13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.953 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.520 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.520 { 00:14:42.520 "cntlid": 85, 00:14:42.520 "qid": 0, 00:14:42.520 "state": "enabled", 00:14:42.520 "thread": "nvmf_tgt_poll_group_000", 00:14:42.520 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:42.520 "listen_address": { 00:14:42.520 "trtype": "TCP", 00:14:42.520 "adrfam": "IPv4", 00:14:42.520 "traddr": "10.0.0.2", 00:14:42.520 "trsvcid": "4420" 00:14:42.520 }, 00:14:42.520 "peer_address": { 00:14:42.520 "trtype": "TCP", 00:14:42.520 "adrfam": "IPv4", 00:14:42.520 "traddr": "10.0.0.1", 00:14:42.520 "trsvcid": "44602" 00:14:42.520 }, 00:14:42.520 "auth": { 00:14:42.520 "state": "completed", 00:14:42.520 "digest": "sha384", 00:14:42.520 "dhgroup": "ffdhe6144" 00:14:42.520 } 00:14:42.520 } 00:14:42.520 ]' 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.520 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.779 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:42.779 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.344 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:43.344 13:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.603 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.860 00:14:43.860 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.860 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.860 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.119 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.120 { 00:14:44.120 "cntlid": 87, 
00:14:44.120 "qid": 0, 00:14:44.120 "state": "enabled", 00:14:44.120 "thread": "nvmf_tgt_poll_group_000", 00:14:44.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:44.120 "listen_address": { 00:14:44.120 "trtype": "TCP", 00:14:44.120 "adrfam": "IPv4", 00:14:44.120 "traddr": "10.0.0.2", 00:14:44.120 "trsvcid": "4420" 00:14:44.120 }, 00:14:44.120 "peer_address": { 00:14:44.120 "trtype": "TCP", 00:14:44.120 "adrfam": "IPv4", 00:14:44.120 "traddr": "10.0.0.1", 00:14:44.120 "trsvcid": "44628" 00:14:44.120 }, 00:14:44.120 "auth": { 00:14:44.120 "state": "completed", 00:14:44.120 "digest": "sha384", 00:14:44.120 "dhgroup": "ffdhe6144" 00:14:44.120 } 00:14:44.120 } 00:14:44.120 ]' 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.120 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.378 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:44.378 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:44.945 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.945 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.513 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.513 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.513 { 00:14:45.513 "cntlid": 89, 00:14:45.513 "qid": 0, 00:14:45.513 "state": "enabled", 00:14:45.513 "thread": "nvmf_tgt_poll_group_000", 00:14:45.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:45.513 "listen_address": { 00:14:45.513 "trtype": "TCP", 00:14:45.513 "adrfam": "IPv4", 00:14:45.513 "traddr": "10.0.0.2", 00:14:45.513 "trsvcid": "4420" 00:14:45.513 }, 00:14:45.513 "peer_address": { 00:14:45.513 "trtype": "TCP", 00:14:45.513 "adrfam": "IPv4", 00:14:45.513 "traddr": "10.0.0.1", 00:14:45.513 "trsvcid": "59864" 00:14:45.513 }, 00:14:45.513 "auth": { 00:14:45.513 "state": "completed", 00:14:45.513 "digest": "sha384", 00:14:45.513 "dhgroup": "ffdhe8192" 00:14:45.513 } 00:14:45.513 } 00:14:45.513 ]' 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.771 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.771 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:45.771 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.339 13:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:46.339 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.598 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.165 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.165 { 00:14:47.165 "cntlid": 91, 00:14:47.165 "qid": 0, 00:14:47.165 "state": "enabled", 00:14:47.165 "thread": "nvmf_tgt_poll_group_000", 00:14:47.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:47.165 "listen_address": { 00:14:47.165 "trtype": "TCP", 00:14:47.165 "adrfam": "IPv4", 00:14:47.165 "traddr": "10.0.0.2", 00:14:47.165 "trsvcid": "4420" 00:14:47.165 }, 00:14:47.165 "peer_address": { 00:14:47.165 "trtype": "TCP", 00:14:47.165 "adrfam": "IPv4", 00:14:47.165 "traddr": "10.0.0.1", 00:14:47.165 "trsvcid": "59882" 00:14:47.165 }, 00:14:47.165 "auth": { 00:14:47.165 "state": "completed", 00:14:47.165 "digest": "sha384", 00:14:47.165 "dhgroup": "ffdhe8192" 00:14:47.165 } 00:14:47.165 } 00:14:47.165 ]' 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.165 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:47.424 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:47.991 13:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:47.991 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.251 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.817 00:14:48.817 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.817 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.817 13:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.817 13:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.817 { 00:14:48.817 "cntlid": 93, 00:14:48.817 "qid": 0, 00:14:48.817 "state": "enabled", 00:14:48.817 "thread": "nvmf_tgt_poll_group_000", 00:14:48.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:48.817 "listen_address": { 00:14:48.817 "trtype": "TCP", 00:14:48.817 "adrfam": "IPv4", 00:14:48.817 "traddr": "10.0.0.2", 00:14:48.817 "trsvcid": "4420" 00:14:48.817 }, 00:14:48.817 "peer_address": { 00:14:48.817 "trtype": "TCP", 00:14:48.817 "adrfam": "IPv4", 00:14:48.817 "traddr": "10.0.0.1", 00:14:48.817 "trsvcid": "59926" 00:14:48.817 }, 00:14:48.817 "auth": { 00:14:48.817 "state": "completed", 00:14:48.817 "digest": "sha384", 00:14:48.817 "dhgroup": "ffdhe8192" 00:14:48.817 } 00:14:48.817 } 00:14:48.817 ]' 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.817 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.075 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.075 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.075 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.075 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:49.075 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.643 13:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:49.643 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.902 13:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.902 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.902 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.902 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.902 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.470 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.470 { 00:14:50.470 "cntlid": 95, 00:14:50.470 "qid": 0, 00:14:50.470 "state": "enabled", 00:14:50.470 "thread": "nvmf_tgt_poll_group_000", 00:14:50.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:50.470 "listen_address": { 00:14:50.470 "trtype": "TCP", 00:14:50.470 "adrfam": "IPv4", 00:14:50.470 "traddr": "10.0.0.2", 00:14:50.470 "trsvcid": "4420" 00:14:50.470 }, 00:14:50.470 "peer_address": { 00:14:50.470 "trtype": "TCP", 00:14:50.470 "adrfam": "IPv4", 00:14:50.470 "traddr": "10.0.0.1", 00:14:50.470 "trsvcid": "59942" 00:14:50.470 }, 00:14:50.470 "auth": { 00:14:50.470 "state": "completed", 00:14:50.470 "digest": "sha384", 00:14:50.470 "dhgroup": "ffdhe8192" 00:14:50.470 } 00:14:50.470 } 00:14:50.470 ]' 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.470 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.727 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:50.727 13:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.296 13:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:51.296 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.555 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.556 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.556 00:14:51.556 
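[editor's annotation] The passes above and below repeat one authentication probe per (digest, dhgroup, key) combination: bdev_nvme_set_options pins the host side to a single digest/dhgroup pair, the target registers the host NQN with the key under test, and a controller is attached, verified, and torn down. A condensed sketch of that outer loop, using only RPCs visible in this log; the digest/dhgroup lists and key ids are the values seen in this excerpt, stand-ins for the test's own arrays:

  hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  for digest in sha384 sha512; do
      for dhgroup in null ffdhe2048 ffdhe6144 ffdhe8192; do
          for keyid in 0 1 2 3; do
              # host side: restrict the initiator to one digest/dhgroup pair
              $hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                  --dhchap-dhgroups "$dhgroup"
              # target side (issued through rpc_cmd; its socket is not shown
              # in this log):
              #   nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
              #       --dhchap-key key$keyid [--dhchap-ctrlr-key ckey$keyid]
              # ... then attach, verify the qpair, detach (see checks below)
          done
      done
  done
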
13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.556 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.556 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.815 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.815 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.815 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.815 13:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.815 { 00:14:51.815 "cntlid": 97, 00:14:51.815 "qid": 0, 00:14:51.815 "state": "enabled", 00:14:51.815 "thread": "nvmf_tgt_poll_group_000", 00:14:51.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:51.815 "listen_address": { 00:14:51.815 "trtype": "TCP", 00:14:51.815 "adrfam": "IPv4", 00:14:51.815 "traddr": "10.0.0.2", 00:14:51.815 "trsvcid": "4420" 00:14:51.815 }, 00:14:51.815 "peer_address": { 00:14:51.815 "trtype": "TCP", 00:14:51.815 "adrfam": "IPv4", 00:14:51.815 "traddr": "10.0.0.1", 00:14:51.815 "trsvcid": "59972" 00:14:51.815 }, 00:14:51.815 "auth": { 00:14:51.815 "state": "completed", 00:14:51.815 "digest": "sha512", 00:14:51.815 "dhgroup": "null" 00:14:51.815 } 00:14:51.815 } 00:14:51.815 ]' 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.815 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.073 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:52.073 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:52.640 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:52.899 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.900 13:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.159 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.159 { 00:14:53.159 "cntlid": 99, 00:14:53.159 "qid": 0, 00:14:53.159 "state": "enabled", 00:14:53.159 "thread": "nvmf_tgt_poll_group_000", 00:14:53.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:53.159 "listen_address": { 00:14:53.159 "trtype": "TCP", 00:14:53.159 "adrfam": "IPv4", 00:14:53.159 "traddr": "10.0.0.2", 00:14:53.159 "trsvcid": "4420" 00:14:53.159 }, 00:14:53.159 "peer_address": { 00:14:53.159 "trtype": "TCP", 00:14:53.159 "adrfam": "IPv4", 00:14:53.159 "traddr": "10.0.0.1", 00:14:53.159 "trsvcid": "59994" 00:14:53.159 }, 00:14:53.159 "auth": { 00:14:53.159 "state": "completed", 00:14:53.159 "digest": "sha512", 00:14:53.159 "dhgroup": "null" 00:14:53.159 } 00:14:53.159 } 00:14:53.159 ]' 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:53.159 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.418 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.418 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.418 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.418 13:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:53.418 13:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:53.985 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
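[editor's annotation] Each pass is validated by dumping the live qpair from the target and asserting on its auth block; the backslash runs in lines such as `[[ completed == \c\o\m\p\l\e\t\e\d ]]` are only bash xtrace's rendering of a quoted literal, not part of the script. An equivalent standalone check for the sha512/null pass, with the jq paths taken from the qpair dumps above (rpc_cmd stands for the target-side rpc.py invocation, whose socket this log does not show):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]  # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]  # DH group (none here)
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished
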
00:14:54.243 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.502 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.502 { 00:14:54.502 "cntlid": 101, 00:14:54.502 "qid": 0, 00:14:54.502 "state": "enabled", 00:14:54.502 "thread": "nvmf_tgt_poll_group_000", 00:14:54.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:54.502 "listen_address": { 00:14:54.502 "trtype": "TCP", 00:14:54.502 "adrfam": "IPv4", 00:14:54.502 "traddr": "10.0.0.2", 00:14:54.502 "trsvcid": "4420" 00:14:54.502 }, 00:14:54.502 "peer_address": { 00:14:54.502 "trtype": "TCP", 00:14:54.502 "adrfam": "IPv4", 00:14:54.502 "traddr": "10.0.0.1", 00:14:54.502 "trsvcid": "60024" 00:14:54.502 }, 00:14:54.502 "auth": { 00:14:54.502 "state": "completed", 00:14:54.502 "digest": "sha512", 00:14:54.502 "dhgroup": "null" 00:14:54.502 } 00:14:54.502 } 00:14:54.502 ]' 00:14:54.502 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.760 13:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.760 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:54.761 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:55.327 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.587 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.846 00:14:55.846 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.846 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.846 13:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.846 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.846 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.846 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.846 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.106 { 00:14:56.106 "cntlid": 103, 00:14:56.106 "qid": 0, 00:14:56.106 "state": "enabled", 00:14:56.106 "thread": "nvmf_tgt_poll_group_000", 00:14:56.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:56.106 "listen_address": { 00:14:56.106 "trtype": "TCP", 00:14:56.106 "adrfam": "IPv4", 00:14:56.106 "traddr": "10.0.0.2", 00:14:56.106 "trsvcid": "4420" 00:14:56.106 }, 00:14:56.106 "peer_address": { 00:14:56.106 "trtype": "TCP", 00:14:56.106 "adrfam": "IPv4", 00:14:56.106 "traddr": "10.0.0.1", 00:14:56.106 "trsvcid": "42212" 00:14:56.106 }, 00:14:56.106 "auth": { 00:14:56.106 "state": "completed", 00:14:56.106 "digest": "sha512", 00:14:56.106 "dhgroup": "null" 00:14:56.106 } 00:14:56.106 } 00:14:56.106 ]' 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.106 13:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:56.106 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:56.674 13:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
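(For readers decoding the trace: each pass drives the SPDK initiator over /var/tmp/host.sock with two RPCs. Below is a minimal sketch of what the hostrpc wrapper above issues, assuming the keyring entries key0/ckey0 were provisioned earlier in auth.sh, outside this excerpt; paths, addresses, and flags are copied from the log itself.)

# Pin the initiator to a single digest/dhgroup combination for this pass.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Attach with a host key plus a controller key, i.e. mutual DH-HMAC-CHAP.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
$rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0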
00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.933 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.192 00:14:57.192 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.192 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.192 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.451 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.451 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.451 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.451 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.451 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.451 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.451 { 00:14:57.451 "cntlid": 105, 00:14:57.451 "qid": 0, 00:14:57.451 "state": "enabled", 00:14:57.451 "thread": "nvmf_tgt_poll_group_000", 00:14:57.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:57.451 "listen_address": { 00:14:57.451 "trtype": "TCP", 00:14:57.451 "adrfam": "IPv4", 00:14:57.452 "traddr": "10.0.0.2", 00:14:57.452 "trsvcid": "4420" 00:14:57.452 }, 00:14:57.452 "peer_address": { 00:14:57.452 "trtype": "TCP", 00:14:57.452 "adrfam": "IPv4", 00:14:57.452 "traddr": "10.0.0.1", 00:14:57.452 "trsvcid": "42232" 00:14:57.452 }, 00:14:57.452 "auth": { 00:14:57.452 "state": "completed", 00:14:57.452 "digest": "sha512", 00:14:57.452 "dhgroup": "ffdhe2048" 00:14:57.452 } 00:14:57.452 } 00:14:57.452 ]' 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.452 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.452 13:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.712 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:57.712 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:14:58.280 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.280 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.281 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.540 00:14:58.540 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.540 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.540 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.799 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.799 { 00:14:58.799 "cntlid": 107, 00:14:58.799 "qid": 0, 00:14:58.799 "state": "enabled", 00:14:58.799 "thread": "nvmf_tgt_poll_group_000", 00:14:58.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:58.799 "listen_address": { 00:14:58.799 "trtype": "TCP", 00:14:58.799 "adrfam": "IPv4", 00:14:58.799 "traddr": "10.0.0.2", 00:14:58.799 "trsvcid": "4420" 00:14:58.799 }, 00:14:58.799 "peer_address": { 00:14:58.799 "trtype": "TCP", 00:14:58.799 "adrfam": "IPv4", 00:14:58.799 "traddr": "10.0.0.1", 00:14:58.799 "trsvcid": "42260" 00:14:58.799 }, 00:14:58.799 "auth": { 00:14:58.799 "state": "completed", 00:14:58.800 "digest": "sha512", 00:14:58.800 "dhgroup": "ffdhe2048" 00:14:58.800 } 00:14:58.800 } 00:14:58.800 ]' 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.800 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.058 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:59.058 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 
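(Target side of the same handshake, as the rpc_cmd nvmf_subsystem_add_host calls above show: the subsystem must allow the host NQN and bind the matching key pair before the attach can authenticate, and the host is removed again once the pass is verified. A sketch only; the target RPC socket is the autotest default, an assumption not visible in this excerpt.)

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

# Authorize the host for this subsystem with key2 (host) and ckey2 (controller).
$rpc_py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# ... attach, verify the qpair, detach (as in the trace above) ...
$rpc_py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"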
00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.627 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.886 00:14:59.886 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.886 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.886 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.145 { 00:15:00.145 "cntlid": 109, 00:15:00.145 "qid": 0, 00:15:00.145 "state": "enabled", 00:15:00.145 "thread": "nvmf_tgt_poll_group_000", 00:15:00.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:00.145 "listen_address": { 00:15:00.145 "trtype": "TCP", 00:15:00.145 "adrfam": "IPv4", 00:15:00.145 "traddr": "10.0.0.2", 00:15:00.145 "trsvcid": "4420" 00:15:00.145 }, 00:15:00.145 "peer_address": { 00:15:00.145 "trtype": "TCP", 00:15:00.145 "adrfam": "IPv4", 00:15:00.145 "traddr": "10.0.0.1", 00:15:00.145 "trsvcid": "42292" 00:15:00.145 }, 00:15:00.145 "auth": { 00:15:00.145 "state": "completed", 00:15:00.145 "digest": "sha512", 00:15:00.145 "dhgroup": "ffdhe2048" 00:15:00.145 } 00:15:00.145 } 00:15:00.145 ]' 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.145 13:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.145 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.404 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:00.404 13:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.970 13:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.970 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.229 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.229 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.487 { 00:15:01.487 "cntlid": 111, 00:15:01.487 "qid": 0, 00:15:01.487 "state": "enabled", 00:15:01.487 "thread": "nvmf_tgt_poll_group_000", 00:15:01.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:01.487 "listen_address": { 00:15:01.487 "trtype": "TCP", 00:15:01.487 "adrfam": "IPv4", 00:15:01.487 "traddr": "10.0.0.2", 00:15:01.487 "trsvcid": "4420" 00:15:01.487 }, 00:15:01.487 "peer_address": { 00:15:01.487 "trtype": "TCP", 00:15:01.487 "adrfam": "IPv4", 00:15:01.487 "traddr": "10.0.0.1", 00:15:01.487 "trsvcid": "42316" 00:15:01.487 }, 00:15:01.487 "auth": { 00:15:01.487 "state": "completed", 00:15:01.487 "digest": "sha512", 00:15:01.487 "dhgroup": "ffdhe2048" 00:15:01.487 } 00:15:01.487 } 00:15:01.487 ]' 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.487 
13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.487 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.746 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:01.746 13:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:02.314 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.572 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.572 00:15:02.830 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.830 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.830 13:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.830 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.830 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.830 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.830 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.831 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.831 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.831 { 00:15:02.831 "cntlid": 113, 00:15:02.831 "qid": 0, 00:15:02.831 "state": "enabled", 00:15:02.831 "thread": "nvmf_tgt_poll_group_000", 00:15:02.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:02.831 "listen_address": { 00:15:02.831 "trtype": "TCP", 00:15:02.831 "adrfam": "IPv4", 00:15:02.831 "traddr": "10.0.0.2", 00:15:02.831 "trsvcid": "4420" 00:15:02.831 }, 00:15:02.831 "peer_address": { 00:15:02.831 "trtype": "TCP", 00:15:02.831 "adrfam": "IPv4", 00:15:02.831 "traddr": "10.0.0.1", 00:15:02.831 "trsvcid": "42346" 00:15:02.831 }, 00:15:02.831 "auth": { 00:15:02.831 "state": "completed", 00:15:02.831 "digest": "sha512", 00:15:02.831 "dhgroup": "ffdhe3072" 00:15:02.831 } 00:15:02.831 } 00:15:02.831 ]' 00:15:02.831 13:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.831 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.831 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.831 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.831 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.089 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.089 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.089 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.089 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:03.089 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.658 13:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.916 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.176 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.176 { 00:15:04.176 "cntlid": 115, 00:15:04.176 "qid": 0, 00:15:04.176 "state": "enabled", 00:15:04.176 "thread": "nvmf_tgt_poll_group_000", 00:15:04.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:04.176 "listen_address": { 00:15:04.176 "trtype": "TCP", 00:15:04.176 "adrfam": "IPv4", 00:15:04.176 "traddr": "10.0.0.2", 00:15:04.176 "trsvcid": "4420" 00:15:04.176 }, 00:15:04.176 "peer_address": { 00:15:04.176 "trtype": "TCP", 00:15:04.176 "adrfam": "IPv4", 
00:15:04.176 "traddr": "10.0.0.1", 00:15:04.176 "trsvcid": "42372" 00:15:04.176 }, 00:15:04.176 "auth": { 00:15:04.176 "state": "completed", 00:15:04.176 "digest": "sha512", 00:15:04.176 "dhgroup": "ffdhe3072" 00:15:04.176 } 00:15:04.176 } 00:15:04.176 ]' 00:15:04.176 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:04.435 13:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:05.004 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.263 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.522 00:15:05.522 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.522 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.522 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.782 { 00:15:05.782 "cntlid": 117, 00:15:05.782 "qid": 0, 00:15:05.782 "state": "enabled", 00:15:05.782 "thread": "nvmf_tgt_poll_group_000", 00:15:05.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:05.782 "listen_address": { 00:15:05.782 "trtype": "TCP", 
00:15:05.782 "adrfam": "IPv4", 00:15:05.782 "traddr": "10.0.0.2", 00:15:05.782 "trsvcid": "4420" 00:15:05.782 }, 00:15:05.782 "peer_address": { 00:15:05.782 "trtype": "TCP", 00:15:05.782 "adrfam": "IPv4", 00:15:05.782 "traddr": "10.0.0.1", 00:15:05.782 "trsvcid": "46738" 00:15:05.782 }, 00:15:05.782 "auth": { 00:15:05.782 "state": "completed", 00:15:05.782 "digest": "sha512", 00:15:05.782 "dhgroup": "ffdhe3072" 00:15:05.782 } 00:15:05.782 } 00:15:05.782 ]' 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.782 13:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.040 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:06.040 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:06.607 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:06.608 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:06.866 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:06.866 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.866 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.866 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.867 13:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.867 00:15:06.867 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.867 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.867 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.125 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.126 { 00:15:07.126 "cntlid": 119, 00:15:07.126 "qid": 0, 00:15:07.126 "state": "enabled", 00:15:07.126 "thread": "nvmf_tgt_poll_group_000", 00:15:07.126 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:07.126 "listen_address": { 00:15:07.126 "trtype": "TCP", 00:15:07.126 "adrfam": "IPv4", 00:15:07.126 "traddr": "10.0.0.2", 00:15:07.126 "trsvcid": "4420" 00:15:07.126 }, 00:15:07.126 "peer_address": { 00:15:07.126 "trtype": "TCP", 00:15:07.126 "adrfam": "IPv4", 00:15:07.126 "traddr": "10.0.0.1", 00:15:07.126 "trsvcid": "46784" 00:15:07.126 }, 00:15:07.126 "auth": { 00:15:07.126 "state": "completed", 00:15:07.126 "digest": "sha512", 00:15:07.126 "dhgroup": "ffdhe3072" 00:15:07.126 } 00:15:07.126 } 00:15:07.126 ]' 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.126 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.384 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:07.384 13:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.951 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:07.951 13:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.211 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.470 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.470 13:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.470 { 00:15:08.470 "cntlid": 121, 00:15:08.470 "qid": 0, 00:15:08.470 "state": "enabled", 00:15:08.470 "thread": "nvmf_tgt_poll_group_000", 00:15:08.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:08.470 "listen_address": { 00:15:08.470 "trtype": "TCP", 00:15:08.470 "adrfam": "IPv4", 00:15:08.470 "traddr": "10.0.0.2", 00:15:08.470 "trsvcid": "4420" 00:15:08.470 }, 00:15:08.470 "peer_address": { 00:15:08.470 "trtype": "TCP", 00:15:08.470 "adrfam": "IPv4", 00:15:08.470 "traddr": "10.0.0.1", 00:15:08.470 "trsvcid": "46806" 00:15:08.470 }, 00:15:08.470 "auth": { 00:15:08.470 "state": "completed", 00:15:08.470 "digest": "sha512", 00:15:08.470 "dhgroup": "ffdhe4096" 00:15:08.470 } 00:15:08.470 } 00:15:08.470 ]' 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.470 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:08.729 13:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:09.297 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.556 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.815 00:15:09.815 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.815 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.815 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.074 { 00:15:10.074 "cntlid": 123, 00:15:10.074 "qid": 0, 00:15:10.074 "state": "enabled", 00:15:10.074 "thread": "nvmf_tgt_poll_group_000", 00:15:10.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:10.074 "listen_address": { 00:15:10.074 "trtype": "TCP", 00:15:10.074 "adrfam": "IPv4", 00:15:10.074 "traddr": "10.0.0.2", 00:15:10.074 "trsvcid": "4420" 00:15:10.074 }, 00:15:10.074 "peer_address": { 00:15:10.074 "trtype": "TCP", 00:15:10.074 "adrfam": "IPv4", 00:15:10.074 "traddr": "10.0.0.1", 00:15:10.074 "trsvcid": "46834" 00:15:10.074 }, 00:15:10.074 "auth": { 00:15:10.074 "state": "completed", 00:15:10.074 "digest": "sha512", 00:15:10.074 "dhgroup": "ffdhe4096" 00:15:10.074 } 00:15:10.074 } 00:15:10.074 ]' 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.074 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.332 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:10.332 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:10.902 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.902 13:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.902 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.161 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.161 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.161 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.161 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.161 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.420 13:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.420 { 00:15:11.420 "cntlid": 125, 00:15:11.420 "qid": 0, 00:15:11.420 "state": "enabled", 00:15:11.420 "thread": "nvmf_tgt_poll_group_000", 00:15:11.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:11.420 "listen_address": { 00:15:11.420 "trtype": "TCP", 00:15:11.420 "adrfam": "IPv4", 00:15:11.420 "traddr": "10.0.0.2", 00:15:11.420 "trsvcid": "4420" 00:15:11.420 }, 00:15:11.420 "peer_address": { 00:15:11.420 "trtype": "TCP", 00:15:11.420 "adrfam": "IPv4", 00:15:11.420 "traddr": "10.0.0.1", 00:15:11.420 "trsvcid": "46872" 00:15:11.420 }, 00:15:11.420 "auth": { 00:15:11.420 "state": "completed", 00:15:11.420 "digest": "sha512", 00:15:11.420 "dhgroup": "ffdhe4096" 00:15:11.420 } 00:15:11.420 } 00:15:11.420 ]' 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.420 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.679 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:11.679 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.248 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.507 00:15:12.507 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.507 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.507 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.766 13:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.766 { 00:15:12.766 "cntlid": 127, 00:15:12.766 "qid": 0, 00:15:12.766 "state": "enabled", 00:15:12.766 "thread": "nvmf_tgt_poll_group_000", 00:15:12.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:12.766 "listen_address": { 00:15:12.766 "trtype": "TCP", 00:15:12.766 "adrfam": "IPv4", 00:15:12.766 "traddr": "10.0.0.2", 00:15:12.766 "trsvcid": "4420" 00:15:12.766 }, 00:15:12.766 "peer_address": { 00:15:12.766 "trtype": "TCP", 00:15:12.766 "adrfam": "IPv4", 00:15:12.766 "traddr": "10.0.0.1", 00:15:12.766 "trsvcid": "46898" 00:15:12.766 }, 00:15:12.766 "auth": { 00:15:12.766 "state": "completed", 00:15:12.766 "digest": "sha512", 00:15:12.766 "dhgroup": "ffdhe4096" 00:15:12.766 } 00:15:12.766 } 00:15:12.766 ]' 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.766 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.766 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.766 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.766 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.766 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.766 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.025 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:13.025 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:13.592 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.852 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.111 00:15:14.111 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.111 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.111 
13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.371 { 00:15:14.371 "cntlid": 129, 00:15:14.371 "qid": 0, 00:15:14.371 "state": "enabled", 00:15:14.371 "thread": "nvmf_tgt_poll_group_000", 00:15:14.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:14.371 "listen_address": { 00:15:14.371 "trtype": "TCP", 00:15:14.371 "adrfam": "IPv4", 00:15:14.371 "traddr": "10.0.0.2", 00:15:14.371 "trsvcid": "4420" 00:15:14.371 }, 00:15:14.371 "peer_address": { 00:15:14.371 "trtype": "TCP", 00:15:14.371 "adrfam": "IPv4", 00:15:14.371 "traddr": "10.0.0.1", 00:15:14.371 "trsvcid": "46928" 00:15:14.371 }, 00:15:14.371 "auth": { 00:15:14.371 "state": "completed", 00:15:14.371 "digest": "sha512", 00:15:14.371 "dhgroup": "ffdhe6144" 00:15:14.371 } 00:15:14.371 } 00:15:14.371 ]' 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.371 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.631 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:14.631 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret 
DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.198 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.456 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.716 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.716 { 00:15:15.716 "cntlid": 131, 00:15:15.716 "qid": 0, 00:15:15.716 "state": "enabled", 00:15:15.716 "thread": "nvmf_tgt_poll_group_000", 00:15:15.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:15.716 "listen_address": { 00:15:15.716 "trtype": "TCP", 00:15:15.716 "adrfam": "IPv4", 00:15:15.716 "traddr": "10.0.0.2", 00:15:15.716 "trsvcid": "4420" 00:15:15.716 }, 00:15:15.716 "peer_address": { 00:15:15.716 "trtype": "TCP", 00:15:15.716 "adrfam": "IPv4", 00:15:15.716 "traddr": "10.0.0.1", 00:15:15.716 "trsvcid": "37760" 00:15:15.716 }, 00:15:15.716 "auth": { 00:15:15.716 "state": "completed", 00:15:15.716 "digest": "sha512", 00:15:15.716 "dhgroup": "ffdhe6144" 00:15:15.716 } 00:15:15.717 } 00:15:15.717 ]' 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.717 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.976 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:15.976 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:16.545 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.807 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.067 00:15:17.067 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.067 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.067 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.326 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.326 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.326 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.326 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.326 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.326 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.326 { 00:15:17.326 "cntlid": 133, 00:15:17.326 "qid": 0, 00:15:17.326 "state": "enabled", 00:15:17.326 "thread": "nvmf_tgt_poll_group_000", 00:15:17.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:17.326 "listen_address": { 00:15:17.326 "trtype": "TCP", 00:15:17.326 "adrfam": "IPv4", 00:15:17.326 "traddr": "10.0.0.2", 00:15:17.326 "trsvcid": "4420" 00:15:17.326 }, 00:15:17.326 "peer_address": { 00:15:17.326 "trtype": "TCP", 00:15:17.326 "adrfam": "IPv4", 00:15:17.327 "traddr": "10.0.0.1", 00:15:17.327 "trsvcid": "37794" 00:15:17.327 }, 00:15:17.327 "auth": { 00:15:17.327 "state": "completed", 00:15:17.327 "digest": "sha512", 00:15:17.327 "dhgroup": "ffdhe6144" 00:15:17.327 } 00:15:17.327 } 00:15:17.327 ]' 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.327 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.585 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret 
DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:17.586 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:15:18.153 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.411 00:15:18.411 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.411 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.411 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.670 { 00:15:18.670 "cntlid": 135, 00:15:18.670 "qid": 0, 00:15:18.670 "state": "enabled", 00:15:18.670 "thread": "nvmf_tgt_poll_group_000", 00:15:18.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:18.670 "listen_address": { 00:15:18.670 "trtype": "TCP", 00:15:18.670 "adrfam": "IPv4", 00:15:18.670 "traddr": "10.0.0.2", 00:15:18.670 "trsvcid": "4420" 00:15:18.670 }, 00:15:18.670 "peer_address": { 00:15:18.670 "trtype": "TCP", 00:15:18.670 "adrfam": "IPv4", 00:15:18.670 "traddr": "10.0.0.1", 00:15:18.670 "trsvcid": "37804" 00:15:18.670 }, 00:15:18.670 "auth": { 00:15:18.670 "state": "completed", 00:15:18.670 "digest": "sha512", 00:15:18.670 "dhgroup": "ffdhe6144" 00:15:18.670 } 00:15:18.670 } 00:15:18.670 ]' 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.670 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.929 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:18.929 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:19.498 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.757 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.016 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.275 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.276 { 00:15:20.276 "cntlid": 137, 00:15:20.276 "qid": 0, 00:15:20.276 "state": "enabled", 00:15:20.276 "thread": "nvmf_tgt_poll_group_000", 00:15:20.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:20.276 "listen_address": { 00:15:20.276 "trtype": "TCP", 00:15:20.276 "adrfam": "IPv4", 00:15:20.276 "traddr": "10.0.0.2", 00:15:20.276 "trsvcid": "4420" 00:15:20.276 }, 00:15:20.276 "peer_address": { 00:15:20.276 "trtype": "TCP", 00:15:20.276 "adrfam": "IPv4", 00:15:20.276 "traddr": "10.0.0.1", 00:15:20.276 "trsvcid": "37844" 00:15:20.276 }, 00:15:20.276 "auth": { 00:15:20.276 "state": "completed", 00:15:20.276 "digest": "sha512", 00:15:20.276 "dhgroup": "ffdhe8192" 00:15:20.276 } 00:15:20.276 } 00:15:20.276 ]' 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.276 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.544 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:20.544 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:21.155 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.446 13:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.446 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.447 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.739 00:15:21.739 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.739 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.739 13:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.057 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.057 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.057 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.058 { 00:15:22.058 "cntlid": 139, 00:15:22.058 "qid": 0, 00:15:22.058 "state": "enabled", 00:15:22.058 "thread": "nvmf_tgt_poll_group_000", 00:15:22.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:22.058 "listen_address": { 00:15:22.058 "trtype": "TCP", 00:15:22.058 "adrfam": "IPv4", 00:15:22.058 "traddr": "10.0.0.2", 00:15:22.058 "trsvcid": "4420" 00:15:22.058 }, 00:15:22.058 "peer_address": { 00:15:22.058 "trtype": "TCP", 00:15:22.058 "adrfam": "IPv4", 00:15:22.058 "traddr": "10.0.0.1", 00:15:22.058 "trsvcid": "37878" 00:15:22.058 }, 00:15:22.058 "auth": { 00:15:22.058 "state": "completed", 00:15:22.058 "digest": "sha512", 00:15:22.058 "dhgroup": "ffdhe8192" 00:15:22.058 } 00:15:22.058 } 00:15:22.058 ]' 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.058 13:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:22.058 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: --dhchap-ctrl-secret DHHC-1:02:YThhYzg5ZDdkNjMyZDk3MWMwYWU3MWQ5MjUyYTgzZTBkYjdhOTg4NzgzOTI5ZGUxkSRd6A==: 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:22.626 13:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.884 13:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.884 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.449 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.449 { 00:15:23.449 "cntlid": 141, 00:15:23.449 "qid": 0, 00:15:23.449 "state": "enabled", 00:15:23.449 "thread": "nvmf_tgt_poll_group_000", 00:15:23.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:23.449 "listen_address": { 00:15:23.449 "trtype": "TCP", 00:15:23.449 "adrfam": "IPv4", 00:15:23.449 "traddr": "10.0.0.2", 00:15:23.449 "trsvcid": "4420" 00:15:23.449 }, 00:15:23.449 "peer_address": { 00:15:23.449 "trtype": "TCP", 00:15:23.449 "adrfam": "IPv4", 00:15:23.449 "traddr": "10.0.0.1", 00:15:23.449 "trsvcid": "37896" 00:15:23.449 }, 00:15:23.449 "auth": { 00:15:23.449 "state": "completed", 00:15:23.449 "digest": "sha512", 00:15:23.449 "dhgroup": "ffdhe8192" 00:15:23.449 } 00:15:23.449 } 00:15:23.449 ]' 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:23.449 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.708 13:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:23.708 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.708 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.708 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.708 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.709 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:23.709 13:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:01:NGExOWVlZTk5YWMxZTQ2NTk3NzU3ZDA2ZjA2YjU3MWa7p1Hc: 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:24.276 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.534 13:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.534 13:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.100 00:15:25.100 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.100 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.100 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.100 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.100 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.101 { 00:15:25.101 "cntlid": 143, 00:15:25.101 "qid": 0, 00:15:25.101 "state": "enabled", 00:15:25.101 "thread": "nvmf_tgt_poll_group_000", 00:15:25.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:25.101 "listen_address": { 00:15:25.101 "trtype": "TCP", 00:15:25.101 "adrfam": "IPv4", 00:15:25.101 "traddr": "10.0.0.2", 00:15:25.101 "trsvcid": "4420" 00:15:25.101 }, 00:15:25.101 "peer_address": { 00:15:25.101 "trtype": "TCP", 00:15:25.101 "adrfam": "IPv4", 00:15:25.101 "traddr": "10.0.0.1", 00:15:25.101 "trsvcid": "37922" 00:15:25.101 }, 00:15:25.101 "auth": { 00:15:25.101 "state": "completed", 00:15:25.101 "digest": "sha512", 00:15:25.101 "dhgroup": "ffdhe8192" 00:15:25.101 } 00:15:25.101 } 00:15:25.101 ]' 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:25.101 
13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.101 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.359 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.359 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.359 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.359 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:25.359 13:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:25.926 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.185 13:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.185 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.752 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.752 { 00:15:26.752 "cntlid": 145, 00:15:26.752 "qid": 0, 00:15:26.752 "state": "enabled", 00:15:26.752 "thread": "nvmf_tgt_poll_group_000", 00:15:26.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:26.752 "listen_address": { 00:15:26.752 "trtype": "TCP", 00:15:26.752 "adrfam": "IPv4", 00:15:26.752 "traddr": "10.0.0.2", 00:15:26.752 "trsvcid": "4420" 00:15:26.752 }, 00:15:26.752 "peer_address": { 00:15:26.752 
"trtype": "TCP", 00:15:26.752 "adrfam": "IPv4", 00:15:26.752 "traddr": "10.0.0.1", 00:15:26.752 "trsvcid": "37722" 00:15:26.752 }, 00:15:26.752 "auth": { 00:15:26.752 "state": "completed", 00:15:26.752 "digest": "sha512", 00:15:26.752 "dhgroup": "ffdhe8192" 00:15:26.752 } 00:15:26.752 } 00:15:26.752 ]' 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.752 13:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.752 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.752 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.752 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.011 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:27.011 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:YmFiZDQyOTgyZWI0OWQ4NWVkOTMwOTI1M2MzMWJlMTFiZmUzN2VmYjg0ZjVlOTgy6H1kpQ==: --dhchap-ctrl-secret DHHC-1:03:YzY1MmViYTEyZDZiMzg1YmE1MGQ1YzgyYzViNGY3OTE3NzRkYWQwMWVlOTVmMGJhN2QwYWNkNTA0YmE4NTQxZKLGT40=: 00:15:27.578 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.578 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:27.578 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.578 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:27.836 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:27.837 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.837 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:27.837 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.837 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:27.837 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:27.837 13:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:28.095 request: 00:15:28.095 { 00:15:28.095 "name": "nvme0", 00:15:28.095 "trtype": "tcp", 00:15:28.095 "traddr": "10.0.0.2", 00:15:28.095 "adrfam": "ipv4", 00:15:28.095 "trsvcid": "4420", 00:15:28.095 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:28.095 "prchk_reftag": false, 00:15:28.095 "prchk_guard": false, 00:15:28.095 "hdgst": false, 00:15:28.095 "ddgst": false, 00:15:28.095 "dhchap_key": "key2", 00:15:28.095 "allow_unrecognized_csi": false, 00:15:28.095 "method": "bdev_nvme_attach_controller", 00:15:28.095 "req_id": 1 00:15:28.095 } 00:15:28.095 Got JSON-RPC error response 00:15:28.095 response: 00:15:28.095 { 00:15:28.095 "code": -5, 00:15:28.095 "message": "Input/output error" 00:15:28.095 } 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.095 13:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:28.095 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:28.662 request: 00:15:28.662 { 00:15:28.662 "name": "nvme0", 00:15:28.662 "trtype": "tcp", 00:15:28.662 "traddr": "10.0.0.2", 00:15:28.662 "adrfam": "ipv4", 00:15:28.662 "trsvcid": "4420", 00:15:28.662 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:28.662 "prchk_reftag": false, 00:15:28.662 "prchk_guard": false, 00:15:28.662 "hdgst": false, 00:15:28.662 "ddgst": false, 00:15:28.662 "dhchap_key": "key1", 00:15:28.662 "dhchap_ctrlr_key": "ckey2", 00:15:28.662 "allow_unrecognized_csi": false, 00:15:28.662 "method": "bdev_nvme_attach_controller", 00:15:28.662 "req_id": 1 00:15:28.662 } 00:15:28.662 Got JSON-RPC error response 00:15:28.662 response: 00:15:28.662 { 00:15:28.662 "code": -5, 00:15:28.662 "message": "Input/output error" 00:15:28.662 } 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:28.662 13:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.662 13:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.921 request: 00:15:28.921 { 00:15:28.921 "name": "nvme0", 00:15:28.921 "trtype": "tcp", 00:15:28.921 "traddr": "10.0.0.2", 00:15:28.921 "adrfam": "ipv4", 00:15:28.921 "trsvcid": "4420", 00:15:28.921 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:28.921 "prchk_reftag": false, 00:15:28.921 "prchk_guard": false, 00:15:28.921 "hdgst": false, 00:15:28.921 "ddgst": false, 00:15:28.921 "dhchap_key": "key1", 00:15:28.921 "dhchap_ctrlr_key": "ckey1", 00:15:28.921 "allow_unrecognized_csi": false, 00:15:28.921 "method": "bdev_nvme_attach_controller", 00:15:28.921 "req_id": 1 00:15:28.921 } 00:15:28.921 Got JSON-RPC error response 00:15:28.921 response: 00:15:28.921 { 00:15:28.921 "code": -5, 00:15:28.921 "message": "Input/output error" 00:15:28.921 } 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 822303 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 822303 ']' 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 822303 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.921 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 822303 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 822303' 00:15:29.179 killing process with pid 822303 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 822303 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 822303 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=847927 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 847927 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 847927 ']' 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.179 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:29.438 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.438 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:29.438 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 847927 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 847927 ']' 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.439 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 null0 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SJM 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.O57 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O57 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MVa 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.iHE ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHE 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:29.698 13:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pv6 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Fw1 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fw1 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WUo 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:29.698 13:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.633 nvme0n1 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.633 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.633 { 00:15:30.633 "cntlid": 1, 00:15:30.633 "qid": 0, 00:15:30.633 "state": "enabled", 00:15:30.633 "thread": "nvmf_tgt_poll_group_000", 00:15:30.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:30.633 "listen_address": { 00:15:30.633 "trtype": "TCP", 00:15:30.633 "adrfam": "IPv4", 00:15:30.633 "traddr": "10.0.0.2", 00:15:30.633 "trsvcid": "4420" 00:15:30.633 }, 00:15:30.633 "peer_address": { 00:15:30.633 "trtype": "TCP", 00:15:30.633 "adrfam": "IPv4", 00:15:30.633 "traddr": "10.0.0.1", 00:15:30.633 "trsvcid": "37778" 00:15:30.633 }, 00:15:30.633 "auth": { 00:15:30.633 "state": "completed", 00:15:30.633 "digest": "sha512", 00:15:30.634 "dhgroup": "ffdhe8192" 00:15:30.634 } 00:15:30.634 } 00:15:30.634 ]' 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.634 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.892 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:30.892 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:31.459 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.718 request: 00:15:31.718 { 00:15:31.718 "name": "nvme0", 00:15:31.718 "trtype": "tcp", 00:15:31.718 "traddr": "10.0.0.2", 00:15:31.718 "adrfam": "ipv4", 00:15:31.718 "trsvcid": "4420", 00:15:31.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:31.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:31.718 "prchk_reftag": false, 00:15:31.718 "prchk_guard": false, 00:15:31.718 "hdgst": false, 00:15:31.718 "ddgst": false, 00:15:31.718 "dhchap_key": "key3", 00:15:31.718 "allow_unrecognized_csi": false, 00:15:31.718 "method": "bdev_nvme_attach_controller", 00:15:31.718 "req_id": 1 00:15:31.718 } 00:15:31.718 Got JSON-RPC error response 00:15:31.718 response: 00:15:31.718 { 00:15:31.718 "code": -5, 00:15:31.718 "message": "Input/output error" 00:15:31.718 } 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:31.718 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.977 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.235 request: 00:15:32.235 { 00:15:32.235 "name": "nvme0", 00:15:32.235 "trtype": "tcp", 00:15:32.235 "traddr": "10.0.0.2", 00:15:32.235 "adrfam": "ipv4", 00:15:32.235 "trsvcid": "4420", 00:15:32.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:32.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:32.235 "prchk_reftag": false, 00:15:32.235 "prchk_guard": false, 00:15:32.235 "hdgst": false, 00:15:32.235 "ddgst": false, 00:15:32.235 "dhchap_key": "key3", 00:15:32.235 "allow_unrecognized_csi": false, 00:15:32.235 "method": "bdev_nvme_attach_controller", 00:15:32.235 "req_id": 1 00:15:32.235 } 00:15:32.235 Got JSON-RPC error response 00:15:32.235 response: 00:15:32.235 { 00:15:32.235 "code": -5, 00:15:32.235 "message": "Input/output error" 00:15:32.235 } 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.235 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.494 request: 00:15:32.494 { 00:15:32.494 "name": "nvme0", 00:15:32.494 "trtype": "tcp", 00:15:32.494 "traddr": "10.0.0.2", 00:15:32.494 "adrfam": "ipv4", 00:15:32.494 "trsvcid": "4420", 00:15:32.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:32.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:32.494 "prchk_reftag": false, 00:15:32.494 "prchk_guard": false, 00:15:32.494 "hdgst": false, 00:15:32.494 "ddgst": false, 00:15:32.494 "dhchap_key": "key0", 00:15:32.494 "dhchap_ctrlr_key": "key1", 00:15:32.494 "allow_unrecognized_csi": false, 00:15:32.494 "method": "bdev_nvme_attach_controller", 00:15:32.494 "req_id": 1 00:15:32.494 } 00:15:32.494 Got JSON-RPC error response 00:15:32.494 response: 00:15:32.494 { 00:15:32.494 "code": -5, 00:15:32.494 "message": "Input/output error" 00:15:32.494 } 00:15:32.494 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:32.494 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.494 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.494 13:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.494 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:32.494 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:32.494 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:32.753 nvme0n1 00:15:32.753 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:32.753 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:32.753 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.011 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.011 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.011 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:33.270 13:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:33.837 nvme0n1 00:15:33.837 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:33.837 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:33.837 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:34.095 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: --dhchap-ctrl-secret DHHC-1:03:MDJlZmY2MzFlZjU1ZGUxNTVlZDk0YjM0MDAxNGYzZWFkYWU1MGYzZGU0ZDA0ZjZiYjYxMTNjZjJhNTE5NmExZpbLWps=: 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.663 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:34.923 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:35.492 request: 00:15:35.492 { 00:15:35.492 "name": "nvme0", 00:15:35.492 "trtype": "tcp", 00:15:35.492 "traddr": "10.0.0.2", 00:15:35.492 "adrfam": "ipv4", 00:15:35.493 "trsvcid": "4420", 00:15:35.493 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:35.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:35.493 "prchk_reftag": false, 00:15:35.493 "prchk_guard": false, 00:15:35.493 "hdgst": false, 00:15:35.493 "ddgst": false, 00:15:35.493 "dhchap_key": "key1", 00:15:35.493 "allow_unrecognized_csi": false, 00:15:35.493 "method": "bdev_nvme_attach_controller", 00:15:35.493 "req_id": 1 00:15:35.493 } 00:15:35.493 Got JSON-RPC error response 00:15:35.493 response: 00:15:35.493 { 00:15:35.493 "code": -5, 00:15:35.493 "message": "Input/output error" 00:15:35.493 } 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.493 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:36.061 nvme0n1 00:15:36.061 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:36.061 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.061 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:36.321 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:36.581 nvme0n1 00:15:36.581 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:36.581 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.581 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:36.841 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.841 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.841 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: '' 2s 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: ]] 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjMwOTBmOTI4NzM3OTcwOWMwNjMxNTRhMTZkYTM4YmY/Ci0T: 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:36.841 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: 2s 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: ]] 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmQwNTAwNjA0ZGYzNDBhOTgxY2Y4MmQwZjc5MzgzNTgwNzJjNGZmMTc3Y2IxYTg3VrySUA==: 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:39.379 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.286 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.855 nvme0n1 00:15:41.855 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.855 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.855 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.855 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.855 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.855 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.113 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:42.113 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:42.113 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:42.372 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@258 -- # jq -r '.[].name' 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.630 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.196 request: 00:15:43.196 { 00:15:43.196 "name": "nvme0", 00:15:43.196 "dhchap_key": "key1", 00:15:43.196 "dhchap_ctrlr_key": "key3", 00:15:43.196 "method": "bdev_nvme_set_keys", 00:15:43.196 "req_id": 1 00:15:43.196 } 00:15:43.196 Got JSON-RPC error response 00:15:43.196 response: 00:15:43.196 { 00:15:43.196 "code": -13, 00:15:43.196 "message": "Permission denied" 00:15:43.196 } 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:15:43.196 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:44.571 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:44.571 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.572 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.138 nvme0n1 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
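The re-key sequence exercised above follows a fixed order: the target's allowed keys are replaced first with nvmf_subsystem_set_keys, then the live controller is rotated from the host with bdev_nvme_set_keys. Rotating the host to a pair the target has not been given (key1/key3 in the trace above) is rejected with "Permission denied" (code -13). A sketch of the matching pair as run at auth.sh@252/@253, again with rpc.py standing in for the full scripts/rpc.py path:

# Target side: replace the keys the subsystem will accept for this host NQN.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Host side: re-authenticate the existing controller with the same pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3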
00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:45.138 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.139 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.139 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.704 request: 00:15:45.704 { 00:15:45.704 "name": "nvme0", 00:15:45.704 "dhchap_key": "key2", 00:15:45.704 "dhchap_ctrlr_key": "key0", 00:15:45.704 "method": "bdev_nvme_set_keys", 00:15:45.704 "req_id": 1 00:15:45.704 } 00:15:45.704 Got JSON-RPC error response 00:15:45.704 response: 00:15:45.704 { 00:15:45.704 "code": -13, 00:15:45.704 "message": "Permission denied" 00:15:45.704 } 00:15:45.704 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:45.704 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:45.705 13:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:47.079 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:47.079 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:47.079 13:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 822328 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 822328 ']' 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 822328 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:47.079 13:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 822328 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 822328' 00:15:47.079 killing process with pid 822328 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 822328 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 822328 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.079 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.338 rmmod nvme_tcp 00:15:47.338 rmmod nvme_fabrics 00:15:47.338 rmmod nvme_keyring 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 847927 ']' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 847927 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 847927 ']' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 847927 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 847927 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 847927' 00:15:47.338 killing process with pid 847927 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 847927 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@976 -- # wait 847927 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.338 13:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.SJM /tmp/spdk.key-sha256.MVa /tmp/spdk.key-sha384.Pv6 /tmp/spdk.key-sha512.WUo /tmp/spdk.key-sha512.O57 /tmp/spdk.key-sha384.iHE /tmp/spdk.key-sha256.Fw1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:15:49.869 00:15:49.869 real 2m17.527s 00:15:49.869 user 5m9.268s 00:15:49.869 sys 0m16.917s 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.869 ************************************ 00:15:49.869 END TEST nvmf_auth_target 00:15:49.869 ************************************ 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.869 ************************************ 00:15:49.869 START TEST nvmf_bdevio_no_huge 00:15:49.869 ************************************ 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:49.869 * Looking for test storage... 
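Condensed, the auth-target teardown traced above amounts to: disconnect the kernel initiator, stop the host and target processes, unload the kernel NVMe/TCP stack, and delete the generated key files. A sketch of those steps as they appear in the trace (process-kill and namespace cleanup omitted):

nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # drop the kernel-initiator session
modprobe -v -r nvme-tcp                          # unload the TCP transport...
modprobe -v -r nvme-fabrics                      # ...and the fabrics core
rm -f /tmp/spdk.key-null.SJM /tmp/spdk.key-sha256.MVa /tmp/spdk.key-sha384.Pv6 \
      /tmp/spdk.key-sha512.WUo /tmp/spdk.key-sha512.O57 /tmp/spdk.key-sha384.iHE \
      /tmp/spdk.key-sha256.Fw1                   # remove the generated DHCHAP key files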
00:15:49.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:49.869 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:49.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.870 --rc genhtml_branch_coverage=1 00:15:49.870 --rc genhtml_function_coverage=1 00:15:49.870 --rc genhtml_legend=1 00:15:49.870 --rc geninfo_all_blocks=1 00:15:49.870 --rc geninfo_unexecuted_blocks=1 00:15:49.870 00:15:49.870 ' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:49.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.870 --rc genhtml_branch_coverage=1 00:15:49.870 --rc genhtml_function_coverage=1 00:15:49.870 --rc genhtml_legend=1 00:15:49.870 --rc geninfo_all_blocks=1 00:15:49.870 --rc geninfo_unexecuted_blocks=1 00:15:49.870 00:15:49.870 ' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:49.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.870 --rc genhtml_branch_coverage=1 00:15:49.870 --rc genhtml_function_coverage=1 00:15:49.870 --rc genhtml_legend=1 00:15:49.870 --rc geninfo_all_blocks=1 00:15:49.870 --rc geninfo_unexecuted_blocks=1 00:15:49.870 00:15:49.870 ' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:49.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.870 --rc genhtml_branch_coverage=1 00:15:49.870 --rc genhtml_function_coverage=1 00:15:49.870 --rc genhtml_legend=1 00:15:49.870 --rc geninfo_all_blocks=1 00:15:49.870 --rc geninfo_unexecuted_blocks=1 00:15:49.870 00:15:49.870 ' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:49.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:15:49.870 13:59:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.222 
13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.222 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.223 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.223 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.223 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:15:55.223 00:15:55.223 --- 10.0.0.2 ping statistics --- 00:15:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.223 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:15:55.223 00:15:55.223 --- 10.0.0.1 ping statistics --- 00:15:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.223 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=856410 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 856410 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 856410 ']' 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.223 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:15:55.224 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.224 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:55.224 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.224 [2024-11-06 13:59:34.248796] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:15:55.224 [2024-11-06 13:59:34.248852] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:55.224 [2024-11-06 13:59:34.340966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.224 [2024-11-06 13:59:34.392021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.224 [2024-11-06 13:59:34.392046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.224 [2024-11-06 13:59:34.392053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.224 [2024-11-06 13:59:34.392062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.224 [2024-11-06 13:59:34.392068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
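For orientation, the target bring-up traced above reduces to a handful of commands, all visible in the preceding lines. This is a minimal sketch assuming the interface names (cvl_0_0, cvl_0_1), addresses, and SPDK build path from this particular run; on other hardware the NIC names and checkout location will differ, and the iptables comment tag is abbreviated here.

  # isolate the target-side port in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side gets 10.0.0.1, target side 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on the default port, tagged so cleanup can strip it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # launch the target inside the namespace: no hugepages, 1024 MB of plain memory,
  # core mask 0x78 = cores 3-6
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The reactor notices that follow confirm the 0x78 mask taking effect: one reactor starts on each of cores 3 through 6.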
00:15:55.224 [2024-11-06 13:59:34.393240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:55.224 [2024-11-06 13:59:34.393390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:55.224 [2024-11-06 13:59:34.393651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.224 [2024-11-06 13:59:34.393651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 [2024-11-06 13:59:35.059566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.793 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 Malloc0 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.053 [2024-11-06 13:59:35.096555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:56.053 { 00:15:56.053 "params": { 00:15:56.053 "name": "Nvme$subsystem", 00:15:56.053 "trtype": "$TEST_TRANSPORT", 00:15:56.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.053 "adrfam": "ipv4", 00:15:56.053 "trsvcid": "$NVMF_PORT", 00:15:56.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.053 "hdgst": ${hdgst:-false}, 00:15:56.053 "ddgst": ${ddgst:-false} 00:15:56.053 }, 00:15:56.053 "method": "bdev_nvme_attach_controller" 00:15:56.053 } 00:15:56.053 EOF 00:15:56.053 )") 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:56.053 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:56.053 "params": { 00:15:56.053 "name": "Nvme1", 00:15:56.053 "trtype": "tcp", 00:15:56.053 "traddr": "10.0.0.2", 00:15:56.053 "adrfam": "ipv4", 00:15:56.053 "trsvcid": "4420", 00:15:56.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.053 "hdgst": false, 00:15:56.053 "ddgst": false 00:15:56.053 }, 00:15:56.053 "method": "bdev_nvme_attach_controller" 00:15:56.053 }' 00:15:56.053 [2024-11-06 13:59:35.135888] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
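The printf output just above is the bdev configuration handed to the bdevio app on /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at the listener created a few lines earlier. As a sketch, an equivalent standalone invocation would look roughly like the following, assuming the usual SPDK subsystems/bdev envelope around that entry (the envelope itself is not shown in the trace):

  # feed the JSON config on fd 62; bdevio, like the target, runs with --no-huge here
  ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 62<<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }
  JSON

The DPDK EAL parameter line that follows shows the same flags taking effect on the bdevio side: -c 0x7 keeps its reactors on cores 0-2, away from the target's cores 3-6.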
00:15:56.053 [2024-11-06 13:59:35.135952] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid856612 ] 00:15:56.053 [2024-11-06 13:59:35.226660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:56.053 [2024-11-06 13:59:35.287018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.053 [2024-11-06 13:59:35.287184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.053 [2024-11-06 13:59:35.287184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.314 I/O targets: 00:15:56.314 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:56.314 00:15:56.314 00:15:56.314 CUnit - A unit testing framework for C - Version 2.1-3 00:15:56.314 http://cunit.sourceforge.net/ 00:15:56.314 00:15:56.314 00:15:56.314 Suite: bdevio tests on: Nvme1n1 00:15:56.314 Test: blockdev write read block ...passed 00:15:56.314 Test: blockdev write zeroes read block ...passed 00:15:56.573 Test: blockdev write zeroes read no split ...passed 00:15:56.573 Test: blockdev write zeroes read split ...passed 00:15:56.573 Test: blockdev write zeroes read split partial ...passed 00:15:56.573 Test: blockdev reset ...[2024-11-06 13:59:35.668528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:56.573 [2024-11-06 13:59:35.668595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254afb0 (9): Bad file descriptor 00:15:56.573 [2024-11-06 13:59:35.802302] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:56.573 passed 00:15:56.573 Test: blockdev write read 8 blocks ...passed 00:15:56.573 Test: blockdev write read size > 128k ...passed 00:15:56.574 Test: blockdev write read invalid size ...passed 00:15:56.574 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:56.574 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:56.574 Test: blockdev write read max offset ...passed 00:15:56.834 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:56.834 Test: blockdev writev readv 8 blocks ...passed 00:15:56.834 Test: blockdev writev readv 30 x 1block ...passed 00:15:56.834 Test: blockdev writev readv block ...passed 00:15:56.834 Test: blockdev writev readv size > 128k ...passed 00:15:56.834 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:56.834 Test: blockdev comparev and writev ...[2024-11-06 13:59:36.024286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.024321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.024337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.024345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.024741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.024753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.024766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.024774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.025197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.025208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.025221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.025229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.025586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.025597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.025611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:56.834 [2024-11-06 13:59:36.025618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:56.834 passed 00:15:56.834 Test: blockdev nvme passthru rw ...passed 00:15:56.834 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:59:36.109932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.834 [2024-11-06 13:59:36.109946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.110299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.834 [2024-11-06 13:59:36.110310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.110648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.834 [2024-11-06 13:59:36.110658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:56.834 [2024-11-06 13:59:36.110989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:56.834 [2024-11-06 13:59:36.110999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:56.834 passed 00:15:57.093 Test: blockdev nvme admin passthru ...passed 00:15:57.093 Test: blockdev copy ...passed 00:15:57.093 00:15:57.093 Run Summary: Type Total Ran Passed Failed Inactive 00:15:57.093 suites 1 1 n/a 0 0 00:15:57.093 tests 23 23 23 0 0 00:15:57.093 asserts 152 152 152 0 n/a 00:15:57.093 00:15:57.093 Elapsed time = 1.341 seconds 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.353 rmmod nvme_tcp 00:15:57.353 rmmod nvme_fabrics 00:15:57.353 rmmod nvme_keyring 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 856410 ']' 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 856410 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 856410 ']' 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 856410 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 856410 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 856410' 00:15:57.353 killing process with pid 856410 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 856410 00:15:57.353 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 856410 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.612 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:00.152 00:16:00.152 real 0m10.161s 00:16:00.152 user 0m13.144s 00:16:00.152 sys 0m4.913s 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.152 ************************************ 00:16:00.152 END TEST nvmf_bdevio_no_huge 00:16:00.152 ************************************ 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:00.152 ************************************ 00:16:00.152 START TEST nvmf_tls 00:16:00.152 ************************************ 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:00.152 * Looking for test storage... 00:16:00.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.152 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.152 --rc genhtml_branch_coverage=1 00:16:00.152 --rc genhtml_function_coverage=1 00:16:00.153 --rc genhtml_legend=1 00:16:00.153 --rc geninfo_all_blocks=1 00:16:00.153 --rc geninfo_unexecuted_blocks=1 00:16:00.153 00:16:00.153 ' 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:00.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.153 --rc genhtml_branch_coverage=1 00:16:00.153 --rc genhtml_function_coverage=1 00:16:00.153 --rc genhtml_legend=1 00:16:00.153 --rc geninfo_all_blocks=1 00:16:00.153 --rc geninfo_unexecuted_blocks=1 00:16:00.153 00:16:00.153 ' 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:00.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.153 --rc genhtml_branch_coverage=1 00:16:00.153 --rc genhtml_function_coverage=1 00:16:00.153 --rc genhtml_legend=1 00:16:00.153 --rc geninfo_all_blocks=1 00:16:00.153 --rc geninfo_unexecuted_blocks=1 00:16:00.153 00:16:00.153 ' 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:00.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.153 --rc genhtml_branch_coverage=1 00:16:00.153 --rc genhtml_function_coverage=1 00:16:00.153 --rc genhtml_legend=1 00:16:00.153 --rc geninfo_all_blocks=1 00:16:00.153 --rc geninfo_unexecuted_blocks=1 00:16:00.153 00:16:00.153 ' 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
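As with the bdevio run before it, tls.sh begins by sourcing test/nvmf/common.sh, which fixes the listener ports and derives a host identity from nvme-cli. A rough sketch of that setup follows; the port values and the NVME_HOST array match the trace, while the UUID extraction shown is only one plausible way to arrive at the host ID the trace prints.

  NVMF_PORT=4420                        # 4421 and 4422 serve as second and third ports
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare UUID, e.g. 801c19ac-fce9-ec11-9bc7-a4bf019282bb
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")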
00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.153 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:16:00.153 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:05.426 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:05.426 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:05.426 Found net devices under 0000:31:00.0: cvl_0_0 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:05.426 Found net devices under 0000:31:00.1: cvl_0_1 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:05.426 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:05.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:16:05.427 00:16:05.427 --- 10.0.0.2 ping statistics --- 00:16:05.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.427 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:16:05.427 00:16:05.427 --- 10.0.0.1 ping statistics --- 00:16:05.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.427 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=861364 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 861364 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 861364 ']' 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.427 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:05.427 [2024-11-06 13:59:44.552120] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:16:05.427 [2024-11-06 13:59:44.552186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.427 [2024-11-06 13:59:44.646890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.427 [2024-11-06 13:59:44.697496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.427 [2024-11-06 13:59:44.697550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.427 [2024-11-06 13:59:44.697559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.427 [2024-11-06 13:59:44.697567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.427 [2024-11-06 13:59:44.697573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.427 [2024-11-06 13:59:44.698404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:06.361 true 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:06.361 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:06.620 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:06.620 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:06.620 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:06.620 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:06.620 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:06.879 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:06.879 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:06.879 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:07.138 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.138 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:07.138 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:07.138 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:07.138 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.138 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:07.396 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:07.396 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:07.396 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:07.654 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.654 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:07.654 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:07.654 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:07.654 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:07.912 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:07.913 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9kBc8QC6cn 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.vdvzc4pgID 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9kBc8QC6cn 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.vdvzc4pgID 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:08.171 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:08.429 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9kBc8QC6cn 00:16:08.429 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9kBc8QC6cn 00:16:08.429 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:08.688 [2024-11-06 13:59:47.729165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.688 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:08.688 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:08.947 [2024-11-06 13:59:48.041916] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:08.947 [2024-11-06 13:59:48.042126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.947 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:08.947 malloc0 00:16:08.947 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.205 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9kBc8QC6cn 00:16:09.463 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:09.463 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9kBc8QC6cn 00:16:21.672 Initializing NVMe Controllers 00:16:21.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:21.672 Initialization complete. Launching workers. 00:16:21.672 ======================================================== 00:16:21.672 Latency(us) 00:16:21.672 Device Information : IOPS MiB/s Average min max 00:16:21.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18810.27 73.48 3402.62 1095.50 4113.89 00:16:21.672 ======================================================== 00:16:21.672 Total : 18810.27 73.48 3402.62 1095.50 4113.89 00:16:21.672 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9kBc8QC6cn 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9kBc8QC6cn 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=864492 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 864492 /var/tmp/bdevperf.sock 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 864492 ']' 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
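A few entries back, format_interchange_psk wrapped each raw key in the NVMe TLS PSK interchange format: an inline python snippet appends a CRC32 to the key bytes and base64-encodes the result, yielding NVMeTLSkey-1:01:<base64(key || crc32)>: (the 01 field mirrors the digest argument passed in the trace). The encoded blob for the first key should therefore decode to the 32 key characters plus a 4-byte CRC trailer; a quick structural check of that, assuming GNU coreutils base64 (it validates the length only, not the CRC):

  key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  b64=${key#NVMeTLSkey-1:01:}; b64=${b64%:}
  # 32 ASCII key characters + 4-byte CRC32 trailer = 36 bytes once decoded.
  echo -n "$b64" | base64 -d | wc -c    # expect 36

Each interchange string is then written to a mktemp file and chmod'd to 0600 before being handed to keyring_file_add_key, as the trace shows.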
00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.672 13:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.672 [2024-11-06 13:59:58.814141] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:21.672 [2024-11-06 13:59:58.814196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864492 ] 00:16:21.672 [2024-11-06 13:59:58.891271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.672 [2024-11-06 13:59:58.926260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.672 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:21.672 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:21.672 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9kBc8QC6cn 00:16:21.672 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.672 [2024-11-06 13:59:59.870865] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.672 TLSTESTn1 00:16:21.672 13:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:21.672 Running I/O for 10 seconds... 
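The happy-path TLS attach above is three RPCs against bdevperf's private socket: register the PSK file under a key name, attach a TLS-enabled controller that references that name, then start the workload. Condensed into a runnable sequence (paths, address, and NQNs copied from the trace; run from the SPDK tree against an already-listening target):

  sock=/var/tmp/bdevperf.sock
  # Register the PSK interchange file under the name the attach call uses.
  scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tmp.9kBc8QC6cn
  # Attach over TCP with TLS; --psk names the keyring entry, not the file.
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  # Start the job bdevperf was configured with (-w verify -t 10);
  # the -t 20 here is just the RPC timeout, not the run time.
  examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests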
00:16:22.863 2699.00 IOPS, 10.54 MiB/s [2024-11-06T13:00:03.081Z] 3152.00 IOPS, 12.31 MiB/s [2024-11-06T13:00:04.456Z] 3062.33 IOPS, 11.96 MiB/s [2024-11-06T13:00:05.390Z] 3065.75 IOPS, 11.98 MiB/s [2024-11-06T13:00:06.325Z] 3000.80 IOPS, 11.72 MiB/s [2024-11-06T13:00:07.259Z] 3220.83 IOPS, 12.58 MiB/s [2024-11-06T13:00:08.194Z] 3260.71 IOPS, 12.74 MiB/s [2024-11-06T13:00:09.128Z] 3211.75 IOPS, 12.55 MiB/s [2024-11-06T13:00:10.063Z] 3251.56 IOPS, 12.70 MiB/s [2024-11-06T13:00:10.322Z] 3200.50 IOPS, 12.50 MiB/s 00:16:31.038 Latency(us) 00:16:31.038 [2024-11-06T13:00:10.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.038 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:31.038 Verification LBA range: start 0x0 length 0x2000 00:16:31.038 TLSTESTn1 : 10.08 3187.52 12.45 0.00 0.00 40014.67 5843.63 80390.83 00:16:31.038 [2024-11-06T13:00:10.322Z] =================================================================================================================== 00:16:31.038 [2024-11-06T13:00:10.322Z] Total : 3187.52 12.45 0.00 0.00 40014.67 5843.63 80390.83 00:16:31.038 { 00:16:31.038 "results": [ 00:16:31.038 { 00:16:31.038 "job": "TLSTESTn1", 00:16:31.038 "core_mask": "0x4", 00:16:31.038 "workload": "verify", 00:16:31.038 "status": "finished", 00:16:31.038 "verify_range": { 00:16:31.038 "start": 0, 00:16:31.038 "length": 8192 00:16:31.038 }, 00:16:31.038 "queue_depth": 128, 00:16:31.038 "io_size": 4096, 00:16:31.038 "runtime": 10.080249, 00:16:31.038 "iops": 3187.5204670043368, 00:16:31.038 "mibps": 12.45125182423569, 00:16:31.038 "io_failed": 0, 00:16:31.038 "io_timeout": 0, 00:16:31.038 "avg_latency_us": 40014.668457668085, 00:16:31.038 "min_latency_us": 5843.626666666667, 00:16:31.038 "max_latency_us": 80390.82666666666 00:16:31.038 } 00:16:31.038 ], 00:16:31.038 "core_count": 1 00:16:31.038 } 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 864492 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 864492 ']' 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 864492 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 864492 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 864492' 00:16:31.038 killing process with pid 864492 00:16:31.038 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 864492 00:16:31.038 Received shutdown signal, test time was about 10.000000 seconds 00:16:31.038 00:16:31.038 Latency(us) 00:16:31.038 [2024-11-06T13:00:10.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.038 [2024-11-06T13:00:10.323Z] 
=================================================================================================================== 00:16:31.039 [2024-11-06T13:00:10.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 864492 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vdvzc4pgID 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vdvzc4pgID 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vdvzc4pgID 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vdvzc4pgID 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=867569 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 867569 /var/tmp/bdevperf.sock 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 867569 ']' 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
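What launches here is a deliberate failure case: run_bdevperf is handed /tmp/tmp.vdvzc4pgID, the key built from different key material than the one registered with the target, and the whole call sits under autotest_common.sh's NOT wrapper, whose exit-status bookkeeping (es=1, (( es > 128 )), (( !es == 0 ))) is visible further down in the trace. A simplified sketch of that inversion pattern (the real helper also vets the wrapped command via valid_exec_arg):

  NOT() {
    local es=0
    "$@" || es=$?
    # Propagate signal deaths (status > 128) instead of masking them.
    (( es > 128 )) && return "$es"
    # Succeed for the caller only if the wrapped command failed.
    (( es != 0 ))
  }

  NOT false && echo "failure was expected: negative test passes"
  NOT true  || echo "unexpected success: negative test fails"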
00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.039 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:31.297 [2024-11-06 14:00:10.322239] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:31.297 [2024-11-06 14:00:10.322299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867569 ] 00:16:31.297 [2024-11-06 14:00:10.386868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.297 [2024-11-06 14:00:10.415254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.297 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:31.297 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:31.297 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vdvzc4pgID 00:16:31.556 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:31.556 [2024-11-06 14:00:10.781605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.556 [2024-11-06 14:00:10.792462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:31.556 [2024-11-06 14:00:10.792849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf960 (107): Transport endpoint is not connected 00:16:31.556 [2024-11-06 14:00:10.793846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf960 (9): Bad file descriptor 00:16:31.556 [2024-11-06 14:00:10.794848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:31.556 [2024-11-06 14:00:10.794854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:31.556 [2024-11-06 14:00:10.794861] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:31.556 [2024-11-06 14:00:10.794869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:31.556 request: 00:16:31.556 { 00:16:31.556 "name": "TLSTEST", 00:16:31.556 "trtype": "tcp", 00:16:31.556 "traddr": "10.0.0.2", 00:16:31.557 "adrfam": "ipv4", 00:16:31.557 "trsvcid": "4420", 00:16:31.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.557 "prchk_reftag": false, 00:16:31.557 "prchk_guard": false, 00:16:31.557 "hdgst": false, 00:16:31.557 "ddgst": false, 00:16:31.557 "psk": "key0", 00:16:31.557 "allow_unrecognized_csi": false, 00:16:31.557 "method": "bdev_nvme_attach_controller", 00:16:31.557 "req_id": 1 00:16:31.557 } 00:16:31.557 Got JSON-RPC error response 00:16:31.557 response: 00:16:31.557 { 00:16:31.557 "code": -5, 00:16:31.557 "message": "Input/output error" 00:16:31.557 } 00:16:31.557 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 867569 00:16:31.557 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 867569 ']' 00:16:31.557 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 867569 00:16:31.557 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:31.557 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:31.557 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 867569 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 867569' 00:16:31.816 killing process with pid 867569 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 867569 00:16:31.816 Received shutdown signal, test time was about 10.000000 seconds 00:16:31.816 00:16:31.816 Latency(us) 00:16:31.816 [2024-11-06T13:00:11.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.816 [2024-11-06T13:00:11.100Z] =================================================================================================================== 00:16:31.816 [2024-11-06T13:00:11.100Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 867569 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9kBc8QC6cn 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.9kBc8QC6cn 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9kBc8QC6cn 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9kBc8QC6cn 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=867722 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 867722 /var/tmp/bdevperf.sock 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 867722 ']' 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.816 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:31.816 [2024-11-06 14:00:10.988180] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:16:31.816 [2024-11-06 14:00:10.988234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867722 ] 00:16:31.816 [2024-11-06 14:00:11.054104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.816 [2024-11-06 14:00:11.082814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.076 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:32.076 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:32.076 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9kBc8QC6cn 00:16:32.076 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:32.335 [2024-11-06 14:00:11.449076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.335 [2024-11-06 14:00:11.453955] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:32.335 [2024-11-06 14:00:11.453976] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:32.335 [2024-11-06 14:00:11.453999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:32.335 [2024-11-06 14:00:11.454147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f9960 (107): Transport endpoint is not connected 00:16:32.335 [2024-11-06 14:00:11.455142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f9960 (9): Bad file descriptor 00:16:32.335 [2024-11-06 14:00:11.456144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:32.335 [2024-11-06 14:00:11.456154] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:32.335 [2024-11-06 14:00:11.456159] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:32.335 [2024-11-06 14:00:11.456167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:32.335 request: 00:16:32.335 { 00:16:32.335 "name": "TLSTEST", 00:16:32.335 "trtype": "tcp", 00:16:32.335 "traddr": "10.0.0.2", 00:16:32.335 "adrfam": "ipv4", 00:16:32.335 "trsvcid": "4420", 00:16:32.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.335 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:32.335 "prchk_reftag": false, 00:16:32.335 "prchk_guard": false, 00:16:32.335 "hdgst": false, 00:16:32.335 "ddgst": false, 00:16:32.335 "psk": "key0", 00:16:32.335 "allow_unrecognized_csi": false, 00:16:32.335 "method": "bdev_nvme_attach_controller", 00:16:32.335 "req_id": 1 00:16:32.335 } 00:16:32.335 Got JSON-RPC error response 00:16:32.335 response: 00:16:32.335 { 00:16:32.335 "code": -5, 00:16:32.335 "message": "Input/output error" 00:16:32.335 } 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 867722 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 867722 ']' 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 867722 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 867722 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 867722' 00:16:32.335 killing process with pid 867722 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 867722 00:16:32.335 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.335 00:16:32.335 Latency(us) 00:16:32.335 [2024-11-06T13:00:11.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.335 [2024-11-06T13:00:11.619Z] =================================================================================================================== 00:16:32.335 [2024-11-06T13:00:11.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 867722 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9kBc8QC6cn 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.9kBc8QC6cn 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9kBc8QC6cn 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9kBc8QC6cn 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=867785 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 867785 /var/tmp/bdevperf.sock 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 867785 ']' 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.335 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.593 [2024-11-06 14:00:11.648330] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:16:32.593 [2024-11-06 14:00:11.648382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867785 ] 00:16:32.593 [2024-11-06 14:00:11.713160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.593 [2024-11-06 14:00:11.742014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.593 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:32.593 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:32.594 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9kBc8QC6cn 00:16:32.852 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:32.852 [2024-11-06 14:00:12.116610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.852 [2024-11-06 14:00:12.125585] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:32.852 [2024-11-06 14:00:12.125605] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:32.852 [2024-11-06 14:00:12.125627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:32.852 [2024-11-06 14:00:12.125827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1688960 (107): Transport endpoint is not connected 00:16:32.852 [2024-11-06 14:00:12.126823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1688960 (9): Bad file descriptor 00:16:32.852 [2024-11-06 14:00:12.127825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:32.852 [2024-11-06 14:00:12.127832] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:32.852 [2024-11-06 14:00:12.127838] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:32.852 [2024-11-06 14:00:12.127846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:32.852 request: 00:16:32.852 { 00:16:32.852 "name": "TLSTEST", 00:16:32.852 "trtype": "tcp", 00:16:32.852 "traddr": "10.0.0.2", 00:16:32.852 "adrfam": "ipv4", 00:16:32.852 "trsvcid": "4420", 00:16:32.852 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:32.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.852 "prchk_reftag": false, 00:16:32.852 "prchk_guard": false, 00:16:32.852 "hdgst": false, 00:16:32.852 "ddgst": false, 00:16:32.852 "psk": "key0", 00:16:32.852 "allow_unrecognized_csi": false, 00:16:32.852 "method": "bdev_nvme_attach_controller", 00:16:32.852 "req_id": 1 00:16:32.852 } 00:16:32.852 Got JSON-RPC error response 00:16:32.852 response: 00:16:32.852 { 00:16:32.852 "code": -5, 00:16:32.852 "message": "Input/output error" 00:16:32.852 } 00:16:33.111 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 867785 00:16:33.111 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 867785 ']' 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 867785 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 867785 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 867785' 00:16:33.112 killing process with pid 867785 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 867785 00:16:33.112 Received shutdown signal, test time was about 10.000000 seconds 00:16:33.112 00:16:33.112 Latency(us) 00:16:33.112 [2024-11-06T13:00:12.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.112 [2024-11-06T13:00:12.396Z] =================================================================================================================== 00:16:33.112 [2024-11-06T13:00:12.396Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 867785 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:33.112 14:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=868068 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 868068 /var/tmp/bdevperf.sock 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 868068 ']' 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.112 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:33.112 [2024-11-06 14:00:12.318572] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:16:33.112 [2024-11-06 14:00:12.318626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868068 ] 00:16:33.112 [2024-11-06 14:00:12.384344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.371 [2024-11-06 14:00:12.412551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.371 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:33.371 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:33.371 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:33.371 [2024-11-06 14:00:12.622453] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:33.371 [2024-11-06 14:00:12.622480] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:33.371 request: 00:16:33.371 { 00:16:33.371 "name": "key0", 00:16:33.371 "path": "", 00:16:33.371 "method": "keyring_file_add_key", 00:16:33.371 "req_id": 1 00:16:33.371 } 00:16:33.371 Got JSON-RPC error response 00:16:33.371 response: 00:16:33.371 { 00:16:33.371 "code": -1, 00:16:33.371 "message": "Operation not permitted" 00:16:33.371 } 00:16:33.371 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:33.630 [2024-11-06 14:00:12.782941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.630 [2024-11-06 14:00:12.782971] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:33.630 request: 00:16:33.630 { 00:16:33.630 "name": "TLSTEST", 00:16:33.630 "trtype": "tcp", 00:16:33.630 "traddr": "10.0.0.2", 00:16:33.630 "adrfam": "ipv4", 00:16:33.630 "trsvcid": "4420", 00:16:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.630 "prchk_reftag": false, 00:16:33.630 "prchk_guard": false, 00:16:33.630 "hdgst": false, 00:16:33.630 "ddgst": false, 00:16:33.630 "psk": "key0", 00:16:33.630 "allow_unrecognized_csi": false, 00:16:33.630 "method": "bdev_nvme_attach_controller", 00:16:33.630 "req_id": 1 00:16:33.630 } 00:16:33.630 Got JSON-RPC error response 00:16:33.630 response: 00:16:33.630 { 00:16:33.630 "code": -126, 00:16:33.630 "message": "Required key not available" 00:16:33.630 } 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 868068 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 868068 ']' 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 868068 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 868068 
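This third negative case passes an empty string as the PSK path. keyring_file_add_key rejects it before any networking happens ("Non-absolute paths are not allowed", JSON-RPC code -1), and the attach that then names key0 fails with -126 "Required key not available" because the key was never added. Reproduced as a sketch, verbatim from the trace apart from the shortened rpc.py path:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
    # -> -1 "Operation not permitted": keyring_file only accepts absolute paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # -> -126 "Required key not available": key0 was never registered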
00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 868068' 00:16:33.630 killing process with pid 868068 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 868068 00:16:33.630 Received shutdown signal, test time was about 10.000000 seconds 00:16:33.630 00:16:33.630 Latency(us) 00:16:33.630 [2024-11-06T13:00:12.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.630 [2024-11-06T13:00:12.914Z] =================================================================================================================== 00:16:33.630 [2024-11-06T13:00:12.914Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.630 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 868068 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 861364 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 861364 ']' 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 861364 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 861364 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 861364' 00:16:33.889 killing process with pid 861364 00:16:33.889 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 861364 00:16:33.890 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 861364 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KSrNGHk957 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KSrNGHk957 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=868157 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 868157 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 868157 ']' 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.890 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:34.147 [2024-11-06 14:00:13.175548] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:34.147 [2024-11-06 14:00:13.175602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.147 [2024-11-06 14:00:13.247839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.147 [2024-11-06 14:00:13.276681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.147 [2024-11-06 14:00:13.276710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:34.147 [2024-11-06 14:00:13.276716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.147 [2024-11-06 14:00:13.276721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.148 [2024-11-06 14:00:13.276725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.148 [2024-11-06 14:00:13.277210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KSrNGHk957 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KSrNGHk957 00:16:34.148 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:34.406 [2024-11-06 14:00:13.516765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.406 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:34.406 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:34.664 [2024-11-06 14:00:13.821499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:34.664 [2024-11-06 14:00:13.821699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.664 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:34.922 malloc0 00:16:34.922 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:34.922 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KSrNGHk957 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KSrNGHk957 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=868454 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 868454 /var/tmp/bdevperf.sock 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 868454 ']' 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.181 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.181 [2024-11-06 14:00:14.458156] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
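The long key used from here on was produced by format_interchange_psk just above: the configured key bytes plus a checksum, base64-encoded and wrapped in the NVMe TLS interchange framing, where the :02: tag matches the digest argument of 2. A runnable sketch of that construction, assuming the standard interchange layout (base64 over the key bytes followed by their CRC-32 appended little-endian) rather than quoting the actual nvmf/common.sh helper:

    # Rebuild the NVMeTLSkey-1:02:...: string seen in the trace (sketch).
    python3 -c 'import base64, zlib
    k = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(k).to_bytes(4, "little")
    print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")'

The result is written to /tmp/tmp.KSrNGHk957 and chmod'd to 0600, and the run that follows is the positive case: with a matching host/subsystem pair on both sides, TLSTESTn1 completes its 10-second verify workload instead of failing at attach.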
00:16:35.181 [2024-11-06 14:00:14.458197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868454 ] 00:16:35.440 [2024-11-06 14:00:14.514325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.440 [2024-11-06 14:00:14.543164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.440 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.440 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:35.440 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:16:35.698 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:35.698 [2024-11-06 14:00:14.905503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.957 TLSTESTn1 00:16:35.957 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:35.957 Running I/O for 10 seconds... 00:16:37.829 1151.00 IOPS, 4.50 MiB/s [2024-11-06T13:00:18.492Z] 1663.50 IOPS, 6.50 MiB/s [2024-11-06T13:00:19.428Z] 1905.67 IOPS, 7.44 MiB/s [2024-11-06T13:00:20.364Z] 1933.75 IOPS, 7.55 MiB/s [2024-11-06T13:00:21.301Z] 1839.60 IOPS, 7.19 MiB/s [2024-11-06T13:00:22.236Z] 1926.33 IOPS, 7.52 MiB/s [2024-11-06T13:00:23.170Z] 1994.14 IOPS, 7.79 MiB/s [2024-11-06T13:00:24.106Z] 1942.50 IOPS, 7.59 MiB/s [2024-11-06T13:00:25.482Z] 1910.33 IOPS, 7.46 MiB/s [2024-11-06T13:00:25.482Z] 2006.70 IOPS, 7.84 MiB/s 00:16:46.198 Latency(us) 00:16:46.198 [2024-11-06T13:00:25.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.198 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:46.198 Verification LBA range: start 0x0 length 0x2000 00:16:46.198 TLSTESTn1 : 10.08 2003.46 7.83 0.00 0.00 63691.48 4942.51 178257.92 00:16:46.198 [2024-11-06T13:00:25.482Z] =================================================================================================================== 00:16:46.198 [2024-11-06T13:00:25.482Z] Total : 2003.46 7.83 0.00 0.00 63691.48 4942.51 178257.92 00:16:46.198 { 00:16:46.198 "results": [ 00:16:46.198 { 00:16:46.198 "job": "TLSTESTn1", 00:16:46.198 "core_mask": "0x4", 00:16:46.198 "workload": "verify", 00:16:46.198 "status": "finished", 00:16:46.198 "verify_range": { 00:16:46.198 "start": 0, 00:16:46.198 "length": 8192 00:16:46.198 }, 00:16:46.198 "queue_depth": 128, 00:16:46.198 "io_size": 4096, 00:16:46.198 "runtime": 10.079556, 00:16:46.198 "iops": 2003.4612635715303, 00:16:46.198 "mibps": 7.82602056082629, 00:16:46.198 "io_failed": 0, 00:16:46.198 "io_timeout": 0, 00:16:46.198 "avg_latency_us": 63691.475503614936, 00:16:46.198 "min_latency_us": 4942.506666666667, 00:16:46.198 "max_latency_us": 178257.92 00:16:46.198 } 00:16:46.198 ], 00:16:46.198 "core_count": 1 
00:16:46.198 } 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 868454 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 868454 ']' 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 868454 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 868454 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 868454' 00:16:46.198 killing process with pid 868454 00:16:46.198 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 868454 00:16:46.198 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.198 00:16:46.198 Latency(us) 00:16:46.198 [2024-11-06T13:00:25.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.198 [2024-11-06T13:00:25.483Z] =================================================================================================================== 00:16:46.199 [2024-11-06T13:00:25.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 868454 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KSrNGHk957 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KSrNGHk957 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KSrNGHk957 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KSrNGHk957 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.199 14:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KSrNGHk957 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=870874 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 870874 /var/tmp/bdevperf.sock 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 870874 ']' 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.199 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.199 [2024-11-06 14:00:25.366334] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
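Before this run the test loosened the key file to 0666 (the chmod at tls.sh@171 above). The bdevperf instance starting here will therefore fail at key registration rather than at the TLS handshake: keyring_file checks the file mode and, as the errors below show, refuses a key whose permissions are looser than owner-only (the 0100666 it prints is the full st_mode of a regular file with 0666 permissions). The fix, applied later in the trace, is simply:

    chmod 0600 /tmp/tmp.KSrNGHk957   # keyring_file_add_key accepts the file again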
00:16:46.199 [2024-11-06 14:00:25.366390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870874 ] 00:16:46.199 [2024-11-06 14:00:25.430844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.199 [2024-11-06 14:00:25.459722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.457 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:46.457 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:46.457 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:16:46.457 [2024-11-06 14:00:25.669626] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KSrNGHk957': 0100666 00:16:46.457 [2024-11-06 14:00:25.669647] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:46.457 request: 00:16:46.457 { 00:16:46.457 "name": "key0", 00:16:46.457 "path": "/tmp/tmp.KSrNGHk957", 00:16:46.457 "method": "keyring_file_add_key", 00:16:46.457 "req_id": 1 00:16:46.457 } 00:16:46.457 Got JSON-RPC error response 00:16:46.457 response: 00:16:46.457 { 00:16:46.457 "code": -1, 00:16:46.457 "message": "Operation not permitted" 00:16:46.457 } 00:16:46.457 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:46.716 [2024-11-06 14:00:25.822072] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.716 [2024-11-06 14:00:25.822096] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:46.716 request: 00:16:46.716 { 00:16:46.716 "name": "TLSTEST", 00:16:46.716 "trtype": "tcp", 00:16:46.716 "traddr": "10.0.0.2", 00:16:46.716 "adrfam": "ipv4", 00:16:46.716 "trsvcid": "4420", 00:16:46.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.716 "prchk_reftag": false, 00:16:46.716 "prchk_guard": false, 00:16:46.716 "hdgst": false, 00:16:46.716 "ddgst": false, 00:16:46.716 "psk": "key0", 00:16:46.716 "allow_unrecognized_csi": false, 00:16:46.716 "method": "bdev_nvme_attach_controller", 00:16:46.716 "req_id": 1 00:16:46.716 } 00:16:46.716 Got JSON-RPC error response 00:16:46.716 response: 00:16:46.716 { 00:16:46.716 "code": -126, 00:16:46.716 "message": "Required key not available" 00:16:46.716 } 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 870874 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 870874 ']' 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 870874 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 870874 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 870874' 00:16:46.716 killing process with pid 870874 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 870874 00:16:46.716 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.716 00:16:46.716 Latency(us) 00:16:46.716 [2024-11-06T13:00:26.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.716 [2024-11-06T13:00:26.000Z] =================================================================================================================== 00:16:46.716 [2024-11-06T13:00:26.000Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 870874 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 868157 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 868157 ']' 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 868157 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:46.716 14:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 868157 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 868157' 00:16:46.975 killing process with pid 868157 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 868157 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 868157 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=871133 
00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 871133 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 871133 ']' 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.975 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.976 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.976 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.976 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.976 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:46.976 [2024-11-06 14:00:26.171663] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:46.976 [2024-11-06 14:00:26.171714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.976 [2024-11-06 14:00:26.247130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.234 [2024-11-06 14:00:26.275213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.234 [2024-11-06 14:00:26.275243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.234 [2024-11-06 14:00:26.275253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.234 [2024-11-06 14:00:26.275258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.234 [2024-11-06 14:00:26.275262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
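A fresh target (pid 871133) is now up so the same permissions check can be exercised server-side. For reference, the sequence that setup_nvmf_tgt drives, condensed from the RPCs visible in this trace with the rpc.py path shortened; in the run below it is expected to fail at the keyring step because the key file is still mode 0666:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.KSrNGHk957
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what enables TLS on the listener (hence the "TLS support is considered experimental" notice each time it runs). Note in the dump below how the failure propagates: once keyring_file_add_key is refused, nvmf_subsystem_add_host reports "Key 'key0' does not exist" and returns -32603 Internal error, the server-side counterpart of the client's -126.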
00:16:47.234 [2024-11-06 14:00:26.275754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KSrNGHk957 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KSrNGHk957 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.KSrNGHk957 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KSrNGHk957 00:16:47.802 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:48.061 [2024-11-06 14:00:27.112547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.061 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:48.061 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:48.321 [2024-11-06 14:00:27.425314] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:48.321 [2024-11-06 14:00:27.425514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.321 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:48.321 malloc0 00:16:48.321 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:48.581 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:16:48.840 [2024-11-06 
14:00:27.896134] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KSrNGHk957': 0100666 00:16:48.840 [2024-11-06 14:00:27.896151] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:48.840 request: 00:16:48.840 { 00:16:48.840 "name": "key0", 00:16:48.840 "path": "/tmp/tmp.KSrNGHk957", 00:16:48.840 "method": "keyring_file_add_key", 00:16:48.840 "req_id": 1 00:16:48.840 } 00:16:48.840 Got JSON-RPC error response 00:16:48.840 response: 00:16:48.840 { 00:16:48.840 "code": -1, 00:16:48.840 "message": "Operation not permitted" 00:16:48.840 } 00:16:48.840 14:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:48.840 [2024-11-06 14:00:28.052536] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:48.840 [2024-11-06 14:00:28.052564] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:48.840 request: 00:16:48.840 { 00:16:48.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.840 "host": "nqn.2016-06.io.spdk:host1", 00:16:48.840 "psk": "key0", 00:16:48.840 "method": "nvmf_subsystem_add_host", 00:16:48.840 "req_id": 1 00:16:48.840 } 00:16:48.841 Got JSON-RPC error response 00:16:48.841 response: 00:16:48.841 { 00:16:48.841 "code": -32603, 00:16:48.841 "message": "Internal error" 00:16:48.841 } 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 871133 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 871133 ']' 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 871133 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 871133 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 871133' 00:16:48.841 killing process with pid 871133 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 871133 00:16:48.841 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 871133 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KSrNGHk957 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=871517 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 871517 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 871517 ']' 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.100 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:49.100 [2024-11-06 14:00:28.257756] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:49.100 [2024-11-06 14:00:28.257808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.100 [2024-11-06 14:00:28.329235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.100 [2024-11-06 14:00:28.356919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.100 [2024-11-06 14:00:28.356950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.100 [2024-11-06 14:00:28.356956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.100 [2024-11-06 14:00:28.356961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.100 [2024-11-06 14:00:28.356965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
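Annotation: the failing branch above is the point of the negative test. keyring_file_add_key rejects /tmp/tmp.KSrNGHk957 while it is mode 0666 ("Invalid permissions for key file ... 0100666"), so nvmf_subsystem_add_host cannot resolve key0 and returns -32603. After target/tls.sh@182 runs chmod 0600, the identical sequence succeeds on the restarted target below. Condensed from the rpc.py calls traced in this log ($RPC is an editorial shorthand for the full /var/jenkins/.../spdk/scripts/rpc.py path, not a variable in the script):

    RPC=./scripts/rpc.py                                # stands in for the full rpc.py path in the trace
    chmod 0600 /tmp/tmp.KSrNGHk957                      # keyring requires owner-only permissions on key files
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.KSrNGHk957  # succeeds once the file is 0600
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0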
00:16:49.100 [2024-11-06 14:00:28.357434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KSrNGHk957 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KSrNGHk957 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:49.359 [2024-11-06 14:00:28.597051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.359 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:49.619 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:49.619 [2024-11-06 14:00:28.901787] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.619 [2024-11-06 14:00:28.901979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.878 14:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:49.878 malloc0 00:16:49.878 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:50.139 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:16:50.139 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=871858 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 871858 /var/tmp/bdevperf.sock 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 871858 ']' 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:16:50.398 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.399 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:50.399 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.399 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.399 [2024-11-06 14:00:29.553381] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:50.399 [2024-11-06 14:00:29.553423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871858 ] 00:16:50.399 [2024-11-06 14:00:29.609826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.399 [2024-11-06 14:00:29.638979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.659 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:50.659 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:50.659 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:16:50.659 14:00:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:50.918 [2024-11-06 14:00:30.005315] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.918 TLSTESTn1 00:16:50.918 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:16:51.177 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:51.177 "subsystems": [ 00:16:51.177 { 00:16:51.177 "subsystem": "keyring", 00:16:51.177 "config": [ 00:16:51.177 { 00:16:51.177 "method": "keyring_file_add_key", 00:16:51.177 "params": { 00:16:51.177 "name": "key0", 00:16:51.177 "path": "/tmp/tmp.KSrNGHk957" 00:16:51.177 } 00:16:51.177 } 00:16:51.177 ] 00:16:51.177 }, 00:16:51.177 { 00:16:51.177 "subsystem": "iobuf", 00:16:51.177 "config": [ 00:16:51.177 { 00:16:51.177 "method": "iobuf_set_options", 00:16:51.177 "params": { 00:16:51.177 "small_pool_count": 8192, 00:16:51.177 "large_pool_count": 1024, 00:16:51.177 "small_bufsize": 8192, 00:16:51.177 "large_bufsize": 135168, 00:16:51.177 "enable_numa": false 00:16:51.177 } 00:16:51.177 } 00:16:51.177 ] 00:16:51.177 }, 00:16:51.177 { 00:16:51.177 "subsystem": "sock", 00:16:51.177 "config": [ 00:16:51.177 { 00:16:51.177 "method": "sock_set_default_impl", 00:16:51.177 "params": { 00:16:51.177 "impl_name": "posix" 00:16:51.177 } 
00:16:51.177 }, 00:16:51.177 { 00:16:51.177 "method": "sock_impl_set_options", 00:16:51.177 "params": { 00:16:51.177 "impl_name": "ssl", 00:16:51.177 "recv_buf_size": 4096, 00:16:51.178 "send_buf_size": 4096, 00:16:51.178 "enable_recv_pipe": true, 00:16:51.178 "enable_quickack": false, 00:16:51.178 "enable_placement_id": 0, 00:16:51.178 "enable_zerocopy_send_server": true, 00:16:51.178 "enable_zerocopy_send_client": false, 00:16:51.178 "zerocopy_threshold": 0, 00:16:51.178 "tls_version": 0, 00:16:51.178 "enable_ktls": false 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "sock_impl_set_options", 00:16:51.178 "params": { 00:16:51.178 "impl_name": "posix", 00:16:51.178 "recv_buf_size": 2097152, 00:16:51.178 "send_buf_size": 2097152, 00:16:51.178 "enable_recv_pipe": true, 00:16:51.178 "enable_quickack": false, 00:16:51.178 "enable_placement_id": 0, 00:16:51.178 "enable_zerocopy_send_server": true, 00:16:51.178 "enable_zerocopy_send_client": false, 00:16:51.178 "zerocopy_threshold": 0, 00:16:51.178 "tls_version": 0, 00:16:51.178 "enable_ktls": false 00:16:51.178 } 00:16:51.178 } 00:16:51.178 ] 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "subsystem": "vmd", 00:16:51.178 "config": [] 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "subsystem": "accel", 00:16:51.178 "config": [ 00:16:51.178 { 00:16:51.178 "method": "accel_set_options", 00:16:51.178 "params": { 00:16:51.178 "small_cache_size": 128, 00:16:51.178 "large_cache_size": 16, 00:16:51.178 "task_count": 2048, 00:16:51.178 "sequence_count": 2048, 00:16:51.178 "buf_count": 2048 00:16:51.178 } 00:16:51.178 } 00:16:51.178 ] 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "subsystem": "bdev", 00:16:51.178 "config": [ 00:16:51.178 { 00:16:51.178 "method": "bdev_set_options", 00:16:51.178 "params": { 00:16:51.178 "bdev_io_pool_size": 65535, 00:16:51.178 "bdev_io_cache_size": 256, 00:16:51.178 "bdev_auto_examine": true, 00:16:51.178 "iobuf_small_cache_size": 128, 00:16:51.178 "iobuf_large_cache_size": 16 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "bdev_raid_set_options", 00:16:51.178 "params": { 00:16:51.178 "process_window_size_kb": 1024, 00:16:51.178 "process_max_bandwidth_mb_sec": 0 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "bdev_iscsi_set_options", 00:16:51.178 "params": { 00:16:51.178 "timeout_sec": 30 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "bdev_nvme_set_options", 00:16:51.178 "params": { 00:16:51.178 "action_on_timeout": "none", 00:16:51.178 "timeout_us": 0, 00:16:51.178 "timeout_admin_us": 0, 00:16:51.178 "keep_alive_timeout_ms": 10000, 00:16:51.178 "arbitration_burst": 0, 00:16:51.178 "low_priority_weight": 0, 00:16:51.178 "medium_priority_weight": 0, 00:16:51.178 "high_priority_weight": 0, 00:16:51.178 "nvme_adminq_poll_period_us": 10000, 00:16:51.178 "nvme_ioq_poll_period_us": 0, 00:16:51.178 "io_queue_requests": 0, 00:16:51.178 "delay_cmd_submit": true, 00:16:51.178 "transport_retry_count": 4, 00:16:51.178 "bdev_retry_count": 3, 00:16:51.178 "transport_ack_timeout": 0, 00:16:51.178 "ctrlr_loss_timeout_sec": 0, 00:16:51.178 "reconnect_delay_sec": 0, 00:16:51.178 "fast_io_fail_timeout_sec": 0, 00:16:51.178 "disable_auto_failback": false, 00:16:51.178 "generate_uuids": false, 00:16:51.178 "transport_tos": 0, 00:16:51.178 "nvme_error_stat": false, 00:16:51.178 "rdma_srq_size": 0, 00:16:51.178 "io_path_stat": false, 00:16:51.178 "allow_accel_sequence": false, 00:16:51.178 "rdma_max_cq_size": 0, 00:16:51.178 "rdma_cm_event_timeout_ms": 0, 
00:16:51.178 "dhchap_digests": [ 00:16:51.178 "sha256", 00:16:51.178 "sha384", 00:16:51.178 "sha512" 00:16:51.178 ], 00:16:51.178 "dhchap_dhgroups": [ 00:16:51.178 "null", 00:16:51.178 "ffdhe2048", 00:16:51.178 "ffdhe3072", 00:16:51.178 "ffdhe4096", 00:16:51.178 "ffdhe6144", 00:16:51.178 "ffdhe8192" 00:16:51.178 ] 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "bdev_nvme_set_hotplug", 00:16:51.178 "params": { 00:16:51.178 "period_us": 100000, 00:16:51.178 "enable": false 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "bdev_malloc_create", 00:16:51.178 "params": { 00:16:51.178 "name": "malloc0", 00:16:51.178 "num_blocks": 8192, 00:16:51.178 "block_size": 4096, 00:16:51.178 "physical_block_size": 4096, 00:16:51.178 "uuid": "88f5d978-1ba0-4119-aca9-b5e0eda47771", 00:16:51.178 "optimal_io_boundary": 0, 00:16:51.178 "md_size": 0, 00:16:51.178 "dif_type": 0, 00:16:51.178 "dif_is_head_of_md": false, 00:16:51.178 "dif_pi_format": 0 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "bdev_wait_for_examine" 00:16:51.178 } 00:16:51.178 ] 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "subsystem": "nbd", 00:16:51.178 "config": [] 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "subsystem": "scheduler", 00:16:51.178 "config": [ 00:16:51.178 { 00:16:51.178 "method": "framework_set_scheduler", 00:16:51.178 "params": { 00:16:51.178 "name": "static" 00:16:51.178 } 00:16:51.178 } 00:16:51.178 ] 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "subsystem": "nvmf", 00:16:51.178 "config": [ 00:16:51.178 { 00:16:51.178 "method": "nvmf_set_config", 00:16:51.178 "params": { 00:16:51.178 "discovery_filter": "match_any", 00:16:51.178 "admin_cmd_passthru": { 00:16:51.178 "identify_ctrlr": false 00:16:51.178 }, 00:16:51.178 "dhchap_digests": [ 00:16:51.178 "sha256", 00:16:51.178 "sha384", 00:16:51.178 "sha512" 00:16:51.178 ], 00:16:51.178 "dhchap_dhgroups": [ 00:16:51.178 "null", 00:16:51.178 "ffdhe2048", 00:16:51.178 "ffdhe3072", 00:16:51.178 "ffdhe4096", 00:16:51.178 "ffdhe6144", 00:16:51.178 "ffdhe8192" 00:16:51.178 ] 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "nvmf_set_max_subsystems", 00:16:51.178 "params": { 00:16:51.178 "max_subsystems": 1024 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "nvmf_set_crdt", 00:16:51.178 "params": { 00:16:51.178 "crdt1": 0, 00:16:51.178 "crdt2": 0, 00:16:51.178 "crdt3": 0 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "nvmf_create_transport", 00:16:51.178 "params": { 00:16:51.178 "trtype": "TCP", 00:16:51.178 "max_queue_depth": 128, 00:16:51.178 "max_io_qpairs_per_ctrlr": 127, 00:16:51.178 "in_capsule_data_size": 4096, 00:16:51.178 "max_io_size": 131072, 00:16:51.178 "io_unit_size": 131072, 00:16:51.178 "max_aq_depth": 128, 00:16:51.178 "num_shared_buffers": 511, 00:16:51.178 "buf_cache_size": 4294967295, 00:16:51.178 "dif_insert_or_strip": false, 00:16:51.178 "zcopy": false, 00:16:51.178 "c2h_success": false, 00:16:51.178 "sock_priority": 0, 00:16:51.178 "abort_timeout_sec": 1, 00:16:51.178 "ack_timeout": 0, 00:16:51.178 "data_wr_pool_size": 0 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "nvmf_create_subsystem", 00:16:51.178 "params": { 00:16:51.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.178 "allow_any_host": false, 00:16:51.178 "serial_number": "SPDK00000000000001", 00:16:51.178 "model_number": "SPDK bdev Controller", 00:16:51.178 "max_namespaces": 10, 00:16:51.178 "min_cntlid": 1, 00:16:51.178 "max_cntlid": 65519, 00:16:51.178 
"ana_reporting": false 00:16:51.178 } 00:16:51.178 }, 00:16:51.178 { 00:16:51.178 "method": "nvmf_subsystem_add_host", 00:16:51.178 "params": { 00:16:51.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.178 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.178 "psk": "key0" 00:16:51.178 } 00:16:51.178 }, 00:16:51.179 { 00:16:51.179 "method": "nvmf_subsystem_add_ns", 00:16:51.179 "params": { 00:16:51.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.179 "namespace": { 00:16:51.179 "nsid": 1, 00:16:51.179 "bdev_name": "malloc0", 00:16:51.179 "nguid": "88F5D9781BA04119ACA9B5E0EDA47771", 00:16:51.179 "uuid": "88f5d978-1ba0-4119-aca9-b5e0eda47771", 00:16:51.179 "no_auto_visible": false 00:16:51.179 } 00:16:51.179 } 00:16:51.179 }, 00:16:51.179 { 00:16:51.179 "method": "nvmf_subsystem_add_listener", 00:16:51.179 "params": { 00:16:51.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.179 "listen_address": { 00:16:51.179 "trtype": "TCP", 00:16:51.179 "adrfam": "IPv4", 00:16:51.179 "traddr": "10.0.0.2", 00:16:51.179 "trsvcid": "4420" 00:16:51.179 }, 00:16:51.179 "secure_channel": true 00:16:51.179 } 00:16:51.179 } 00:16:51.179 ] 00:16:51.179 } 00:16:51.179 ] 00:16:51.179 }' 00:16:51.179 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:51.439 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:51.439 "subsystems": [ 00:16:51.439 { 00:16:51.439 "subsystem": "keyring", 00:16:51.439 "config": [ 00:16:51.439 { 00:16:51.439 "method": "keyring_file_add_key", 00:16:51.439 "params": { 00:16:51.439 "name": "key0", 00:16:51.439 "path": "/tmp/tmp.KSrNGHk957" 00:16:51.439 } 00:16:51.439 } 00:16:51.439 ] 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "subsystem": "iobuf", 00:16:51.439 "config": [ 00:16:51.439 { 00:16:51.439 "method": "iobuf_set_options", 00:16:51.439 "params": { 00:16:51.439 "small_pool_count": 8192, 00:16:51.439 "large_pool_count": 1024, 00:16:51.439 "small_bufsize": 8192, 00:16:51.439 "large_bufsize": 135168, 00:16:51.439 "enable_numa": false 00:16:51.439 } 00:16:51.439 } 00:16:51.439 ] 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "subsystem": "sock", 00:16:51.439 "config": [ 00:16:51.439 { 00:16:51.439 "method": "sock_set_default_impl", 00:16:51.439 "params": { 00:16:51.439 "impl_name": "posix" 00:16:51.439 } 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "method": "sock_impl_set_options", 00:16:51.439 "params": { 00:16:51.439 "impl_name": "ssl", 00:16:51.439 "recv_buf_size": 4096, 00:16:51.439 "send_buf_size": 4096, 00:16:51.439 "enable_recv_pipe": true, 00:16:51.439 "enable_quickack": false, 00:16:51.439 "enable_placement_id": 0, 00:16:51.439 "enable_zerocopy_send_server": true, 00:16:51.439 "enable_zerocopy_send_client": false, 00:16:51.439 "zerocopy_threshold": 0, 00:16:51.439 "tls_version": 0, 00:16:51.439 "enable_ktls": false 00:16:51.439 } 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "method": "sock_impl_set_options", 00:16:51.439 "params": { 00:16:51.439 "impl_name": "posix", 00:16:51.439 "recv_buf_size": 2097152, 00:16:51.439 "send_buf_size": 2097152, 00:16:51.439 "enable_recv_pipe": true, 00:16:51.439 "enable_quickack": false, 00:16:51.439 "enable_placement_id": 0, 00:16:51.439 "enable_zerocopy_send_server": true, 00:16:51.439 "enable_zerocopy_send_client": false, 00:16:51.439 "zerocopy_threshold": 0, 00:16:51.439 "tls_version": 0, 00:16:51.439 "enable_ktls": false 00:16:51.439 } 00:16:51.439 } 00:16:51.439 ] 00:16:51.439 }, 
00:16:51.439 { 00:16:51.439 "subsystem": "vmd", 00:16:51.439 "config": [] 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "subsystem": "accel", 00:16:51.439 "config": [ 00:16:51.439 { 00:16:51.439 "method": "accel_set_options", 00:16:51.439 "params": { 00:16:51.439 "small_cache_size": 128, 00:16:51.439 "large_cache_size": 16, 00:16:51.439 "task_count": 2048, 00:16:51.439 "sequence_count": 2048, 00:16:51.439 "buf_count": 2048 00:16:51.439 } 00:16:51.439 } 00:16:51.439 ] 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "subsystem": "bdev", 00:16:51.439 "config": [ 00:16:51.439 { 00:16:51.439 "method": "bdev_set_options", 00:16:51.439 "params": { 00:16:51.439 "bdev_io_pool_size": 65535, 00:16:51.439 "bdev_io_cache_size": 256, 00:16:51.439 "bdev_auto_examine": true, 00:16:51.439 "iobuf_small_cache_size": 128, 00:16:51.439 "iobuf_large_cache_size": 16 00:16:51.439 } 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "method": "bdev_raid_set_options", 00:16:51.439 "params": { 00:16:51.439 "process_window_size_kb": 1024, 00:16:51.439 "process_max_bandwidth_mb_sec": 0 00:16:51.439 } 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "method": "bdev_iscsi_set_options", 00:16:51.439 "params": { 00:16:51.439 "timeout_sec": 30 00:16:51.439 } 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "method": "bdev_nvme_set_options", 00:16:51.439 "params": { 00:16:51.439 "action_on_timeout": "none", 00:16:51.439 "timeout_us": 0, 00:16:51.439 "timeout_admin_us": 0, 00:16:51.439 "keep_alive_timeout_ms": 10000, 00:16:51.439 "arbitration_burst": 0, 00:16:51.439 "low_priority_weight": 0, 00:16:51.439 "medium_priority_weight": 0, 00:16:51.439 "high_priority_weight": 0, 00:16:51.439 "nvme_adminq_poll_period_us": 10000, 00:16:51.439 "nvme_ioq_poll_period_us": 0, 00:16:51.439 "io_queue_requests": 512, 00:16:51.439 "delay_cmd_submit": true, 00:16:51.439 "transport_retry_count": 4, 00:16:51.439 "bdev_retry_count": 3, 00:16:51.439 "transport_ack_timeout": 0, 00:16:51.439 "ctrlr_loss_timeout_sec": 0, 00:16:51.439 "reconnect_delay_sec": 0, 00:16:51.439 "fast_io_fail_timeout_sec": 0, 00:16:51.439 "disable_auto_failback": false, 00:16:51.439 "generate_uuids": false, 00:16:51.439 "transport_tos": 0, 00:16:51.439 "nvme_error_stat": false, 00:16:51.439 "rdma_srq_size": 0, 00:16:51.439 "io_path_stat": false, 00:16:51.439 "allow_accel_sequence": false, 00:16:51.439 "rdma_max_cq_size": 0, 00:16:51.439 "rdma_cm_event_timeout_ms": 0, 00:16:51.439 "dhchap_digests": [ 00:16:51.439 "sha256", 00:16:51.439 "sha384", 00:16:51.439 "sha512" 00:16:51.439 ], 00:16:51.439 "dhchap_dhgroups": [ 00:16:51.439 "null", 00:16:51.439 "ffdhe2048", 00:16:51.439 "ffdhe3072", 00:16:51.439 "ffdhe4096", 00:16:51.439 "ffdhe6144", 00:16:51.439 "ffdhe8192" 00:16:51.439 ] 00:16:51.439 } 00:16:51.439 }, 00:16:51.439 { 00:16:51.439 "method": "bdev_nvme_attach_controller", 00:16:51.439 "params": { 00:16:51.439 "name": "TLSTEST", 00:16:51.439 "trtype": "TCP", 00:16:51.439 "adrfam": "IPv4", 00:16:51.439 "traddr": "10.0.0.2", 00:16:51.439 "trsvcid": "4420", 00:16:51.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.439 "prchk_reftag": false, 00:16:51.439 "prchk_guard": false, 00:16:51.439 "ctrlr_loss_timeout_sec": 0, 00:16:51.439 "reconnect_delay_sec": 0, 00:16:51.439 "fast_io_fail_timeout_sec": 0, 00:16:51.439 "psk": "key0", 00:16:51.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.440 "hdgst": false, 00:16:51.440 "ddgst": false, 00:16:51.440 "multipath": "multipath" 00:16:51.440 } 00:16:51.440 }, 00:16:51.440 { 00:16:51.440 "method": "bdev_nvme_set_hotplug", 00:16:51.440 "params": { 
00:16:51.440 "period_us": 100000, 00:16:51.440 "enable": false 00:16:51.440 } 00:16:51.440 }, 00:16:51.440 { 00:16:51.440 "method": "bdev_wait_for_examine" 00:16:51.440 } 00:16:51.440 ] 00:16:51.440 }, 00:16:51.440 { 00:16:51.440 "subsystem": "nbd", 00:16:51.440 "config": [] 00:16:51.440 } 00:16:51.440 ] 00:16:51.440 }' 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 871858 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 871858 ']' 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 871858 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 871858 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 871858' 00:16:51.440 killing process with pid 871858 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 871858 00:16:51.440 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.440 00:16:51.440 Latency(us) 00:16:51.440 [2024-11-06T13:00:30.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.440 [2024-11-06T13:00:30.724Z] =================================================================================================================== 00:16:51.440 [2024-11-06T13:00:30.724Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 871858 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 871517 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 871517 ']' 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 871517 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:51.440 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 871517 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 871517' 00:16:51.800 killing process with pid 871517 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 871517 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 871517 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 
00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.800 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:51.800 "subsystems": [ 00:16:51.800 { 00:16:51.800 "subsystem": "keyring", 00:16:51.800 "config": [ 00:16:51.800 { 00:16:51.800 "method": "keyring_file_add_key", 00:16:51.800 "params": { 00:16:51.800 "name": "key0", 00:16:51.800 "path": "/tmp/tmp.KSrNGHk957" 00:16:51.800 } 00:16:51.800 } 00:16:51.800 ] 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "subsystem": "iobuf", 00:16:51.800 "config": [ 00:16:51.800 { 00:16:51.800 "method": "iobuf_set_options", 00:16:51.800 "params": { 00:16:51.800 "small_pool_count": 8192, 00:16:51.800 "large_pool_count": 1024, 00:16:51.800 "small_bufsize": 8192, 00:16:51.800 "large_bufsize": 135168, 00:16:51.800 "enable_numa": false 00:16:51.800 } 00:16:51.800 } 00:16:51.800 ] 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "subsystem": "sock", 00:16:51.800 "config": [ 00:16:51.800 { 00:16:51.800 "method": "sock_set_default_impl", 00:16:51.800 "params": { 00:16:51.800 "impl_name": "posix" 00:16:51.800 } 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "method": "sock_impl_set_options", 00:16:51.800 "params": { 00:16:51.800 "impl_name": "ssl", 00:16:51.800 "recv_buf_size": 4096, 00:16:51.800 "send_buf_size": 4096, 00:16:51.800 "enable_recv_pipe": true, 00:16:51.800 "enable_quickack": false, 00:16:51.800 "enable_placement_id": 0, 00:16:51.800 "enable_zerocopy_send_server": true, 00:16:51.800 "enable_zerocopy_send_client": false, 00:16:51.800 "zerocopy_threshold": 0, 00:16:51.800 "tls_version": 0, 00:16:51.800 "enable_ktls": false 00:16:51.800 } 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "method": "sock_impl_set_options", 00:16:51.800 "params": { 00:16:51.800 "impl_name": "posix", 00:16:51.800 "recv_buf_size": 2097152, 00:16:51.800 "send_buf_size": 2097152, 00:16:51.800 "enable_recv_pipe": true, 00:16:51.800 "enable_quickack": false, 00:16:51.800 "enable_placement_id": 0, 00:16:51.800 "enable_zerocopy_send_server": true, 00:16:51.800 "enable_zerocopy_send_client": false, 00:16:51.800 "zerocopy_threshold": 0, 00:16:51.800 "tls_version": 0, 00:16:51.800 "enable_ktls": false 00:16:51.800 } 00:16:51.800 } 00:16:51.800 ] 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "subsystem": "vmd", 00:16:51.800 "config": [] 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "subsystem": "accel", 00:16:51.800 "config": [ 00:16:51.800 { 00:16:51.800 "method": "accel_set_options", 00:16:51.800 "params": { 00:16:51.800 "small_cache_size": 128, 00:16:51.800 "large_cache_size": 16, 00:16:51.800 "task_count": 2048, 00:16:51.800 "sequence_count": 2048, 00:16:51.800 "buf_count": 2048 00:16:51.800 } 00:16:51.800 } 00:16:51.800 ] 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "subsystem": "bdev", 00:16:51.800 "config": [ 00:16:51.800 { 00:16:51.800 "method": "bdev_set_options", 00:16:51.800 "params": { 00:16:51.800 "bdev_io_pool_size": 65535, 00:16:51.800 "bdev_io_cache_size": 256, 00:16:51.800 "bdev_auto_examine": true, 00:16:51.800 "iobuf_small_cache_size": 128, 00:16:51.800 "iobuf_large_cache_size": 16 00:16:51.800 } 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "method": "bdev_raid_set_options", 00:16:51.800 "params": { 00:16:51.800 "process_window_size_kb": 1024, 00:16:51.800 
"process_max_bandwidth_mb_sec": 0 00:16:51.800 } 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "method": "bdev_iscsi_set_options", 00:16:51.800 "params": { 00:16:51.800 "timeout_sec": 30 00:16:51.800 } 00:16:51.800 }, 00:16:51.800 { 00:16:51.800 "method": "bdev_nvme_set_options", 00:16:51.800 "params": { 00:16:51.800 "action_on_timeout": "none", 00:16:51.800 "timeout_us": 0, 00:16:51.800 "timeout_admin_us": 0, 00:16:51.800 "keep_alive_timeout_ms": 10000, 00:16:51.800 "arbitration_burst": 0, 00:16:51.800 "low_priority_weight": 0, 00:16:51.800 "medium_priority_weight": 0, 00:16:51.800 "high_priority_weight": 0, 00:16:51.800 "nvme_adminq_poll_period_us": 10000, 00:16:51.800 "nvme_ioq_poll_period_us": 0, 00:16:51.800 "io_queue_requests": 0, 00:16:51.800 "delay_cmd_submit": true, 00:16:51.800 "transport_retry_count": 4, 00:16:51.801 "bdev_retry_count": 3, 00:16:51.801 "transport_ack_timeout": 0, 00:16:51.801 "ctrlr_loss_timeout_sec": 0, 00:16:51.801 "reconnect_delay_sec": 0, 00:16:51.801 "fast_io_fail_timeout_sec": 0, 00:16:51.801 "disable_auto_failback": false, 00:16:51.801 "generate_uuids": false, 00:16:51.801 "transport_tos": 0, 00:16:51.801 "nvme_error_stat": false, 00:16:51.801 "rdma_srq_size": 0, 00:16:51.801 "io_path_stat": false, 00:16:51.801 "allow_accel_sequence": false, 00:16:51.801 "rdma_max_cq_size": 0, 00:16:51.801 "rdma_cm_event_timeout_ms": 0, 00:16:51.801 "dhchap_digests": [ 00:16:51.801 "sha256", 00:16:51.801 "sha384", 00:16:51.801 "sha512" 00:16:51.801 ], 00:16:51.801 "dhchap_dhgroups": [ 00:16:51.801 "null", 00:16:51.801 "ffdhe2048", 00:16:51.801 "ffdhe3072", 00:16:51.801 "ffdhe4096", 00:16:51.801 "ffdhe6144", 00:16:51.801 "ffdhe8192" 00:16:51.801 ] 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "bdev_nvme_set_hotplug", 00:16:51.801 "params": { 00:16:51.801 "period_us": 100000, 00:16:51.801 "enable": false 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "bdev_malloc_create", 00:16:51.801 "params": { 00:16:51.801 "name": "malloc0", 00:16:51.801 "num_blocks": 8192, 00:16:51.801 "block_size": 4096, 00:16:51.801 "physical_block_size": 4096, 00:16:51.801 "uuid": "88f5d978-1ba0-4119-aca9-b5e0eda47771", 00:16:51.801 "optimal_io_boundary": 0, 00:16:51.801 "md_size": 0, 00:16:51.801 "dif_type": 0, 00:16:51.801 "dif_is_head_of_md": false, 00:16:51.801 "dif_pi_format": 0 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "bdev_wait_for_examine" 00:16:51.801 } 00:16:51.801 ] 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "subsystem": "nbd", 00:16:51.801 "config": [] 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "subsystem": "scheduler", 00:16:51.801 "config": [ 00:16:51.801 { 00:16:51.801 "method": "framework_set_scheduler", 00:16:51.801 "params": { 00:16:51.801 "name": "static" 00:16:51.801 } 00:16:51.801 } 00:16:51.801 ] 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "subsystem": "nvmf", 00:16:51.801 "config": [ 00:16:51.801 { 00:16:51.801 "method": "nvmf_set_config", 00:16:51.801 "params": { 00:16:51.801 "discovery_filter": "match_any", 00:16:51.801 "admin_cmd_passthru": { 00:16:51.801 "identify_ctrlr": false 00:16:51.801 }, 00:16:51.801 "dhchap_digests": [ 00:16:51.801 "sha256", 00:16:51.801 "sha384", 00:16:51.801 "sha512" 00:16:51.801 ], 00:16:51.801 "dhchap_dhgroups": [ 00:16:51.801 "null", 00:16:51.801 "ffdhe2048", 00:16:51.801 "ffdhe3072", 00:16:51.801 "ffdhe4096", 00:16:51.801 "ffdhe6144", 00:16:51.801 "ffdhe8192" 00:16:51.801 ] 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_set_max_subsystems", 
00:16:51.801 "params": { 00:16:51.801 "max_subsystems": 1024 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_set_crdt", 00:16:51.801 "params": { 00:16:51.801 "crdt1": 0, 00:16:51.801 "crdt2": 0, 00:16:51.801 "crdt3": 0 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_create_transport", 00:16:51.801 "params": { 00:16:51.801 "trtype": "TCP", 00:16:51.801 "max_queue_depth": 128, 00:16:51.801 "max_io_qpairs_per_ctrlr": 127, 00:16:51.801 "in_capsule_data_size": 4096, 00:16:51.801 "max_io_size": 131072, 00:16:51.801 "io_unit_size": 131072, 00:16:51.801 "max_aq_depth": 128, 00:16:51.801 "num_shared_buffers": 511, 00:16:51.801 "buf_cache_size": 4294967295, 00:16:51.801 "dif_insert_or_strip": false, 00:16:51.801 "zcopy": false, 00:16:51.801 "c2h_success": false, 00:16:51.801 "sock_priority": 0, 00:16:51.801 "abort_timeout_sec": 1, 00:16:51.801 "ack_timeout": 0, 00:16:51.801 "data_wr_pool_size": 0 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_create_subsystem", 00:16:51.801 "params": { 00:16:51.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.801 "allow_any_host": false, 00:16:51.801 "serial_number": "SPDK00000000000001", 00:16:51.801 "model_number": "SPDK bdev Controller", 00:16:51.801 "max_namespaces": 10, 00:16:51.801 "min_cntlid": 1, 00:16:51.801 "max_cntlid": 65519, 00:16:51.801 "ana_reporting": false 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_subsystem_add_host", 00:16:51.801 "params": { 00:16:51.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.801 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.801 "psk": "key0" 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_subsystem_add_ns", 00:16:51.801 "params": { 00:16:51.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.801 "namespace": { 00:16:51.801 "nsid": 1, 00:16:51.801 "bdev_name": "malloc0", 00:16:51.801 "nguid": "88F5D9781BA04119ACA9B5E0EDA47771", 00:16:51.801 "uuid": "88f5d978-1ba0-4119-aca9-b5e0eda47771", 00:16:51.801 "no_auto_visible": false 00:16:51.801 } 00:16:51.801 } 00:16:51.801 }, 00:16:51.801 { 00:16:51.801 "method": "nvmf_subsystem_add_listener", 00:16:51.801 "params": { 00:16:51.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.801 "listen_address": { 00:16:51.801 "trtype": "TCP", 00:16:51.801 "adrfam": "IPv4", 00:16:51.801 "traddr": "10.0.0.2", 00:16:51.801 "trsvcid": "4420" 00:16:51.801 }, 00:16:51.801 "secure_channel": true 00:16:51.801 } 00:16:51.801 } 00:16:51.801 ] 00:16:51.801 } 00:16:51.801 ] 00:16:51.801 }' 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=872214 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 872214 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 872214 ']' 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.801 14:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:51.801 [2024-11-06 14:00:30.877280] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:16:51.801 [2024-11-06 14:00:30.877335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.801 [2024-11-06 14:00:30.947147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.801 [2024-11-06 14:00:30.975456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.801 [2024-11-06 14:00:30.975483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.801 [2024-11-06 14:00:30.975488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.801 [2024-11-06 14:00:30.975493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.801 [2024-11-06 14:00:30.975497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.801 [2024-11-06 14:00:30.975945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.129 [2024-11-06 14:00:31.170089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.129 [2024-11-06 14:00:31.202125] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.129 [2024-11-06 14:00:31.202333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=872496 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 872496 /var/tmp/bdevperf.sock 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 872496 ']' 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:16:52.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:52.413 14:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:52.413 "subsystems": [ 00:16:52.413 { 00:16:52.413 "subsystem": "keyring", 00:16:52.413 "config": [ 00:16:52.413 { 00:16:52.413 "method": "keyring_file_add_key", 00:16:52.413 "params": { 00:16:52.413 "name": "key0", 00:16:52.413 "path": "/tmp/tmp.KSrNGHk957" 00:16:52.413 } 00:16:52.413 } 00:16:52.413 ] 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "subsystem": "iobuf", 00:16:52.413 "config": [ 00:16:52.413 { 00:16:52.413 "method": "iobuf_set_options", 00:16:52.413 "params": { 00:16:52.413 "small_pool_count": 8192, 00:16:52.413 "large_pool_count": 1024, 00:16:52.413 "small_bufsize": 8192, 00:16:52.413 "large_bufsize": 135168, 00:16:52.413 "enable_numa": false 00:16:52.413 } 00:16:52.413 } 00:16:52.413 ] 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "subsystem": "sock", 00:16:52.413 "config": [ 00:16:52.413 { 00:16:52.413 "method": "sock_set_default_impl", 00:16:52.413 "params": { 00:16:52.413 "impl_name": "posix" 00:16:52.413 } 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "method": "sock_impl_set_options", 00:16:52.413 "params": { 00:16:52.413 "impl_name": "ssl", 00:16:52.413 "recv_buf_size": 4096, 00:16:52.413 "send_buf_size": 4096, 00:16:52.413 "enable_recv_pipe": true, 00:16:52.413 "enable_quickack": false, 00:16:52.413 "enable_placement_id": 0, 00:16:52.413 "enable_zerocopy_send_server": true, 00:16:52.413 "enable_zerocopy_send_client": false, 00:16:52.413 "zerocopy_threshold": 0, 00:16:52.413 "tls_version": 0, 00:16:52.413 "enable_ktls": false 00:16:52.413 } 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "method": "sock_impl_set_options", 00:16:52.413 "params": { 00:16:52.413 "impl_name": "posix", 00:16:52.413 "recv_buf_size": 2097152, 00:16:52.413 "send_buf_size": 2097152, 00:16:52.413 "enable_recv_pipe": true, 00:16:52.413 "enable_quickack": false, 00:16:52.413 "enable_placement_id": 0, 00:16:52.413 "enable_zerocopy_send_server": true, 00:16:52.413 "enable_zerocopy_send_client": false, 00:16:52.413 "zerocopy_threshold": 0, 00:16:52.413 "tls_version": 0, 00:16:52.413 "enable_ktls": false 00:16:52.413 } 00:16:52.413 } 00:16:52.413 ] 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "subsystem": "vmd", 00:16:52.413 "config": [] 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "subsystem": "accel", 00:16:52.413 "config": [ 00:16:52.413 { 00:16:52.413 "method": "accel_set_options", 00:16:52.413 "params": { 00:16:52.413 "small_cache_size": 128, 00:16:52.413 "large_cache_size": 16, 00:16:52.413 "task_count": 2048, 00:16:52.413 "sequence_count": 2048, 00:16:52.413 "buf_count": 2048 00:16:52.413 } 00:16:52.413 } 00:16:52.413 ] 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "subsystem": "bdev", 00:16:52.413 "config": [ 00:16:52.413 { 00:16:52.413 "method": "bdev_set_options", 00:16:52.413 "params": { 00:16:52.413 "bdev_io_pool_size": 65535, 00:16:52.413 "bdev_io_cache_size": 256, 00:16:52.413 "bdev_auto_examine": true, 00:16:52.413 
"iobuf_small_cache_size": 128, 00:16:52.413 "iobuf_large_cache_size": 16 00:16:52.413 } 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "method": "bdev_raid_set_options", 00:16:52.413 "params": { 00:16:52.413 "process_window_size_kb": 1024, 00:16:52.413 "process_max_bandwidth_mb_sec": 0 00:16:52.413 } 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "method": "bdev_iscsi_set_options", 00:16:52.413 "params": { 00:16:52.413 "timeout_sec": 30 00:16:52.413 } 00:16:52.413 }, 00:16:52.413 { 00:16:52.413 "method": "bdev_nvme_set_options", 00:16:52.413 "params": { 00:16:52.413 "action_on_timeout": "none", 00:16:52.413 "timeout_us": 0, 00:16:52.413 "timeout_admin_us": 0, 00:16:52.413 "keep_alive_timeout_ms": 10000, 00:16:52.413 "arbitration_burst": 0, 00:16:52.413 "low_priority_weight": 0, 00:16:52.413 "medium_priority_weight": 0, 00:16:52.413 "high_priority_weight": 0, 00:16:52.413 "nvme_adminq_poll_period_us": 10000, 00:16:52.413 "nvme_ioq_poll_period_us": 0, 00:16:52.413 "io_queue_requests": 512, 00:16:52.413 "delay_cmd_submit": true, 00:16:52.413 "transport_retry_count": 4, 00:16:52.413 "bdev_retry_count": 3, 00:16:52.413 "transport_ack_timeout": 0, 00:16:52.413 "ctrlr_loss_timeout_sec": 0, 00:16:52.413 "reconnect_delay_sec": 0, 00:16:52.413 "fast_io_fail_timeout_sec": 0, 00:16:52.413 "disable_auto_failback": false, 00:16:52.413 "generate_uuids": false, 00:16:52.413 "transport_tos": 0, 00:16:52.413 "nvme_error_stat": false, 00:16:52.413 "rdma_srq_size": 0, 00:16:52.414 "io_path_stat": false, 00:16:52.414 "allow_accel_sequence": false, 00:16:52.414 "rdma_max_cq_size": 0, 00:16:52.414 "rdma_cm_event_timeout_ms": 0, 00:16:52.414 "dhchap_digests": [ 00:16:52.414 "sha256", 00:16:52.414 "sha384", 00:16:52.414 "sha512" 00:16:52.414 ], 00:16:52.414 "dhchap_dhgroups": [ 00:16:52.414 "null", 00:16:52.414 "ffdhe2048", 00:16:52.414 "ffdhe3072", 00:16:52.414 "ffdhe4096", 00:16:52.414 "ffdhe6144", 00:16:52.414 "ffdhe8192" 00:16:52.414 ] 00:16:52.414 } 00:16:52.414 }, 00:16:52.414 { 00:16:52.414 "method": "bdev_nvme_attach_controller", 00:16:52.414 "params": { 00:16:52.414 "name": "TLSTEST", 00:16:52.414 "trtype": "TCP", 00:16:52.414 "adrfam": "IPv4", 00:16:52.414 "traddr": "10.0.0.2", 00:16:52.414 "trsvcid": "4420", 00:16:52.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.414 "prchk_reftag": false, 00:16:52.414 "prchk_guard": false, 00:16:52.414 "ctrlr_loss_timeout_sec": 0, 00:16:52.414 "reconnect_delay_sec": 0, 00:16:52.414 "fast_io_fail_timeout_sec": 0, 00:16:52.414 "psk": "key0", 00:16:52.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.414 "hdgst": false, 00:16:52.414 "ddgst": false, 00:16:52.414 "multipath": "multipath" 00:16:52.414 } 00:16:52.414 }, 00:16:52.414 { 00:16:52.414 "method": "bdev_nvme_set_hotplug", 00:16:52.414 "params": { 00:16:52.414 "period_us": 100000, 00:16:52.414 "enable": false 00:16:52.414 } 00:16:52.414 }, 00:16:52.414 { 00:16:52.414 "method": "bdev_wait_for_examine" 00:16:52.414 } 00:16:52.414 ] 00:16:52.414 }, 00:16:52.414 { 00:16:52.414 "subsystem": "nbd", 00:16:52.414 "config": [] 00:16:52.414 } 00:16:52.414 ] 00:16:52.414 }' 00:16:52.671 [2024-11-06 14:00:31.700378] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:16:52.671 [2024-11-06 14:00:31.700429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872496 ] 00:16:52.671 [2024-11-06 14:00:31.765539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.671 [2024-11-06 14:00:31.794427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.671 [2024-11-06 14:00:31.929508] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.238 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:53.238 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:53.239 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:53.498 Running I/O for 10 seconds... 00:16:55.369 3257.00 IOPS, 12.72 MiB/s [2024-11-06T13:00:35.589Z] 2483.00 IOPS, 9.70 MiB/s [2024-11-06T13:00:36.964Z] 2398.33 IOPS, 9.37 MiB/s [2024-11-06T13:00:37.900Z] 2472.75 IOPS, 9.66 MiB/s [2024-11-06T13:00:38.837Z] 2746.20 IOPS, 10.73 MiB/s [2024-11-06T13:00:39.773Z] 2581.83 IOPS, 10.09 MiB/s [2024-11-06T13:00:40.710Z] 2430.86 IOPS, 9.50 MiB/s [2024-11-06T13:00:41.646Z] 2365.62 IOPS, 9.24 MiB/s [2024-11-06T13:00:42.583Z] 2423.00 IOPS, 9.46 MiB/s [2024-11-06T13:00:42.842Z] 2337.70 IOPS, 9.13 MiB/s 00:17:03.558 Latency(us) 00:17:03.558 [2024-11-06T13:00:42.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.558 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:03.558 Verification LBA range: start 0x0 length 0x2000 00:17:03.558 TLSTESTn1 : 10.09 2329.15 9.10 0.00 0.00 54743.85 5297.49 147674.45 00:17:03.558 [2024-11-06T13:00:42.842Z] =================================================================================================================== 00:17:03.558 [2024-11-06T13:00:42.842Z] Total : 2329.15 9.10 0.00 0.00 54743.85 5297.49 147674.45 00:17:03.558 { 00:17:03.558 "results": [ 00:17:03.558 { 00:17:03.558 "job": "TLSTESTn1", 00:17:03.558 "core_mask": "0x4", 00:17:03.558 "workload": "verify", 00:17:03.558 "status": "finished", 00:17:03.558 "verify_range": { 00:17:03.558 "start": 0, 00:17:03.558 "length": 8192 00:17:03.558 }, 00:17:03.558 "queue_depth": 128, 00:17:03.558 "io_size": 4096, 00:17:03.558 "runtime": 10.091661, 00:17:03.558 "iops": 2329.1507711168656, 00:17:03.558 "mibps": 9.098245199675256, 00:17:03.558 "io_failed": 0, 00:17:03.558 "io_timeout": 0, 00:17:03.558 "avg_latency_us": 54743.85350748068, 00:17:03.558 "min_latency_us": 5297.493333333333, 00:17:03.558 "max_latency_us": 147674.45333333334 00:17:03.558 } 00:17:03.558 ], 00:17:03.558 "core_count": 1 00:17:03.558 } 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 872496 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 872496 ']' 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 872496 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 872496 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 872496' 00:17:03.558 killing process with pid 872496 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 872496 00:17:03.558 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.558 00:17:03.558 Latency(us) 00:17:03.558 [2024-11-06T13:00:42.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.558 [2024-11-06T13:00:42.842Z] =================================================================================================================== 00:17:03.558 [2024-11-06T13:00:42.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 872496 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 872214 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 872214 ']' 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 872214 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:03.558 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 872214 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 872214' 00:17:03.817 killing process with pid 872214 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 872214 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 872214 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=874905 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 874905 00:17:03.817 14:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 874905 ']' 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:03.817 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.817 [2024-11-06 14:00:43.010171] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:17:03.817 [2024-11-06 14:00:43.010226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.817 [2024-11-06 14:00:43.092524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.076 [2024-11-06 14:00:43.127129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.076 [2024-11-06 14:00:43.127162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.076 [2024-11-06 14:00:43.127171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.076 [2024-11-06 14:00:43.127179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.076 [2024-11-06 14:00:43.127187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
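Annotation: next, target/tls.sh@221 repeats setup_nvmf_tgt with the same /tmp/tmp.KSrNGHk957, still mode 0600 from the earlier chmod, so keyring_file_add_key succeeds immediately this time. For reference, the initiator half that pairs with this target (seen earlier around the TLSTEST attach) registers the identical PSK under the bdevperf application's own keyring and dials the TLS listener:

    BPERF="./scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf's private RPC socket
    $BPERF keyring_file_add_key key0 /tmp/tmp.KSrNGHk957
    $BPERF bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0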
00:17:04.076 [2024-11-06 14:00:43.127753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.643 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KSrNGHk957 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KSrNGHk957 00:17:04.644 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:04.903 [2024-11-06 14:00:43.965776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.903 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:04.903 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:05.162 [2024-11-06 14:00:44.278554] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.162 [2024-11-06 14:00:44.278793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.162 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:05.162 malloc0 00:17:05.421 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.421 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=875273 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 875273 /var/tmp/bdevperf.sock 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 875273 ']' 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.681 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:05.681 [2024-11-06 14:00:44.937427] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:17:05.681 [2024-11-06 14:00:44.937482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875273 ] 00:17:05.941 [2024-11-06 14:00:45.001407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.941 [2024-11-06 14:00:45.031097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.941 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.941 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:05.941 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:17:06.200 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:06.200 [2024-11-06 14:00:45.390488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.200 nvme0n1 00:17:06.200 14:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:06.461 Running I/O for 1 seconds... 
00:17:07.398 1453.00 IOPS, 5.68 MiB/s 00:17:07.398 Latency(us) 00:17:07.398 [2024-11-06T13:00:46.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.398 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:07.398 Verification LBA range: start 0x0 length 0x2000 00:17:07.398 nvme0n1 : 1.09 1456.59 5.69 0.00 0.00 85137.60 5106.35 167772.16 00:17:07.398 [2024-11-06T13:00:46.682Z] =================================================================================================================== 00:17:07.398 [2024-11-06T13:00:46.683Z] Total : 1456.59 5.69 0.00 0.00 85137.60 5106.35 167772.16 00:17:07.399 { 00:17:07.399 "results": [ 00:17:07.399 { 00:17:07.399 "job": "nvme0n1", 00:17:07.399 "core_mask": "0x2", 00:17:07.399 "workload": "verify", 00:17:07.399 "status": "finished", 00:17:07.399 "verify_range": { 00:17:07.399 "start": 0, 00:17:07.399 "length": 8192 00:17:07.399 }, 00:17:07.399 "queue_depth": 128, 00:17:07.399 "io_size": 4096, 00:17:07.399 "runtime": 1.086101, 00:17:07.399 "iops": 1456.5864500631158, 00:17:07.399 "mibps": 5.689790820559046, 00:17:07.399 "io_failed": 0, 00:17:07.399 "io_timeout": 0, 00:17:07.399 "avg_latency_us": 85137.60391066161, 00:17:07.399 "min_latency_us": 5106.346666666666, 00:17:07.399 "max_latency_us": 167772.16 00:17:07.399 } 00:17:07.399 ], 00:17:07.399 "core_count": 1 00:17:07.399 } 00:17:07.399 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 875273 00:17:07.399 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 875273 ']' 00:17:07.399 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 875273 00:17:07.399 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:07.399 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:07.399 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 875273 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 875273' 00:17:07.658 killing process with pid 875273 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 875273 00:17:07.658 Received shutdown signal, test time was about 1.000000 seconds 00:17:07.658 00:17:07.658 Latency(us) 00:17:07.658 [2024-11-06T13:00:46.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.658 [2024-11-06T13:00:46.942Z] =================================================================================================================== 00:17:07.658 [2024-11-06T13:00:46.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 875273 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 874905 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 874905 ']' 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 874905 00:17:07.658 14:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 874905 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 874905' 00:17:07.658 killing process with pid 874905 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 874905 00:17:07.658 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 874905 00:17:07.917 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:07.917 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.917 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:07.917 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.917 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=875890 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 875890 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 875890 ']' 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:07.918 14:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.918 [2024-11-06 14:00:47.011795] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:17:07.918 [2024-11-06 14:00:47.011851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.918 [2024-11-06 14:00:47.094540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.918 [2024-11-06 14:00:47.132105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.918 [2024-11-06 14:00:47.132144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:07.918 [2024-11-06 14:00:47.132152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.918 [2024-11-06 14:00:47.132158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.918 [2024-11-06 14:00:47.132164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.918 [2024-11-06 14:00:47.132774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.857 [2024-11-06 14:00:47.821568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.857 malloc0 00:17:08.857 [2024-11-06 14:00:47.847475] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.857 [2024-11-06 14:00:47.847675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=875972 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 875972 /var/tmp/bdevperf.sock 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 875972 ']' 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.857 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:08.857 [2024-11-06 14:00:47.909153] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:17:08.857 [2024-11-06 14:00:47.909200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875972 ] 00:17:08.857 [2024-11-06 14:00:47.973306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.857 [2024-11-06 14:00:48.002980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.857 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.857 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:08.857 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KSrNGHk957 00:17:09.116 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:09.116 [2024-11-06 14:00:48.362513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.375 nvme0n1 00:17:09.375 14:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.375 Running I/O for 1 seconds... 00:17:10.311 1941.00 IOPS, 7.58 MiB/s 00:17:10.311 Latency(us) 00:17:10.311 [2024-11-06T13:00:49.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.311 Verification LBA range: start 0x0 length 0x2000 00:17:10.311 nvme0n1 : 1.06 1946.03 7.60 0.00 0.00 64249.18 3904.85 107915.95 00:17:10.311 [2024-11-06T13:00:49.595Z] =================================================================================================================== 00:17:10.311 [2024-11-06T13:00:49.595Z] Total : 1946.03 7.60 0.00 0.00 64249.18 3904.85 107915.95 00:17:10.311 { 00:17:10.311 "results": [ 00:17:10.311 { 00:17:10.311 "job": "nvme0n1", 00:17:10.311 "core_mask": "0x2", 00:17:10.311 "workload": "verify", 00:17:10.311 "status": "finished", 00:17:10.311 "verify_range": { 00:17:10.311 "start": 0, 00:17:10.311 "length": 8192 00:17:10.311 }, 00:17:10.311 "queue_depth": 128, 00:17:10.311 "io_size": 4096, 00:17:10.311 "runtime": 1.063703, 00:17:10.311 "iops": 1946.0319280851893, 00:17:10.311 "mibps": 7.601687219082771, 00:17:10.311 "io_failed": 0, 00:17:10.311 "io_timeout": 0, 00:17:10.311 "avg_latency_us": 64249.18322705314, 00:17:10.311 "min_latency_us": 3904.8533333333335, 00:17:10.311 "max_latency_us": 107915.94666666667 00:17:10.311 } 00:17:10.311 ], 00:17:10.311 "core_count": 1 00:17:10.311 } 00:17:10.311 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:10.311 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.311 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.569 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.569 14:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:10.569 "subsystems": [ 00:17:10.569 { 00:17:10.569 "subsystem": "keyring", 00:17:10.569 "config": [ 00:17:10.569 { 00:17:10.569 "method": "keyring_file_add_key", 00:17:10.569 "params": { 00:17:10.569 "name": "key0", 00:17:10.569 "path": "/tmp/tmp.KSrNGHk957" 00:17:10.569 } 00:17:10.569 } 00:17:10.569 ] 00:17:10.569 }, 00:17:10.569 { 00:17:10.569 "subsystem": "iobuf", 00:17:10.569 "config": [ 00:17:10.569 { 00:17:10.569 "method": "iobuf_set_options", 00:17:10.569 "params": { 00:17:10.569 "small_pool_count": 8192, 00:17:10.569 "large_pool_count": 1024, 00:17:10.569 "small_bufsize": 8192, 00:17:10.569 "large_bufsize": 135168, 00:17:10.569 "enable_numa": false 00:17:10.569 } 00:17:10.569 } 00:17:10.569 ] 00:17:10.569 }, 00:17:10.569 { 00:17:10.569 "subsystem": "sock", 00:17:10.569 "config": [ 00:17:10.569 { 00:17:10.569 "method": "sock_set_default_impl", 00:17:10.569 "params": { 00:17:10.569 "impl_name": "posix" 00:17:10.569 } 00:17:10.569 }, 00:17:10.570 { 00:17:10.570 "method": "sock_impl_set_options", 00:17:10.570 "params": { 00:17:10.570 "impl_name": "ssl", 00:17:10.570 "recv_buf_size": 4096, 00:17:10.570 "send_buf_size": 4096, 00:17:10.570 "enable_recv_pipe": true, 00:17:10.570 "enable_quickack": false, 00:17:10.570 "enable_placement_id": 0, 00:17:10.570 "enable_zerocopy_send_server": true, 00:17:10.570 "enable_zerocopy_send_client": false, 00:17:10.570 "zerocopy_threshold": 0, 00:17:10.570 "tls_version": 0, 00:17:10.570 "enable_ktls": false 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "sock_impl_set_options", 00:17:10.570 "params": { 00:17:10.570 "impl_name": "posix", 00:17:10.570 "recv_buf_size": 2097152, 00:17:10.570 "send_buf_size": 2097152, 00:17:10.570 "enable_recv_pipe": true, 00:17:10.570 "enable_quickack": false, 00:17:10.570 "enable_placement_id": 0, 00:17:10.570 "enable_zerocopy_send_server": true, 00:17:10.570 "enable_zerocopy_send_client": false, 00:17:10.570 "zerocopy_threshold": 0, 00:17:10.570 "tls_version": 0, 00:17:10.570 "enable_ktls": false 00:17:10.570 } 00:17:10.570 } 00:17:10.570 ] 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "subsystem": "vmd", 00:17:10.570 "config": [] 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "subsystem": "accel", 00:17:10.570 "config": [ 00:17:10.570 { 00:17:10.570 "method": "accel_set_options", 00:17:10.570 "params": { 00:17:10.570 "small_cache_size": 128, 00:17:10.570 "large_cache_size": 16, 00:17:10.570 "task_count": 2048, 00:17:10.570 "sequence_count": 2048, 00:17:10.570 "buf_count": 2048 00:17:10.570 } 00:17:10.570 } 00:17:10.570 ] 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "subsystem": "bdev", 00:17:10.570 "config": [ 00:17:10.570 { 00:17:10.570 "method": "bdev_set_options", 00:17:10.570 "params": { 00:17:10.570 "bdev_io_pool_size": 65535, 00:17:10.570 "bdev_io_cache_size": 256, 00:17:10.570 "bdev_auto_examine": true, 00:17:10.570 "iobuf_small_cache_size": 128, 00:17:10.570 "iobuf_large_cache_size": 16 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "bdev_raid_set_options", 00:17:10.570 "params": { 00:17:10.570 "process_window_size_kb": 1024, 00:17:10.570 "process_max_bandwidth_mb_sec": 0 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "bdev_iscsi_set_options", 00:17:10.570 "params": { 00:17:10.570 "timeout_sec": 30 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "bdev_nvme_set_options", 00:17:10.570 "params": { 00:17:10.570 "action_on_timeout": "none", 00:17:10.570 
"timeout_us": 0, 00:17:10.570 "timeout_admin_us": 0, 00:17:10.570 "keep_alive_timeout_ms": 10000, 00:17:10.570 "arbitration_burst": 0, 00:17:10.570 "low_priority_weight": 0, 00:17:10.570 "medium_priority_weight": 0, 00:17:10.570 "high_priority_weight": 0, 00:17:10.570 "nvme_adminq_poll_period_us": 10000, 00:17:10.570 "nvme_ioq_poll_period_us": 0, 00:17:10.570 "io_queue_requests": 0, 00:17:10.570 "delay_cmd_submit": true, 00:17:10.570 "transport_retry_count": 4, 00:17:10.570 "bdev_retry_count": 3, 00:17:10.570 "transport_ack_timeout": 0, 00:17:10.570 "ctrlr_loss_timeout_sec": 0, 00:17:10.570 "reconnect_delay_sec": 0, 00:17:10.570 "fast_io_fail_timeout_sec": 0, 00:17:10.570 "disable_auto_failback": false, 00:17:10.570 "generate_uuids": false, 00:17:10.570 "transport_tos": 0, 00:17:10.570 "nvme_error_stat": false, 00:17:10.570 "rdma_srq_size": 0, 00:17:10.570 "io_path_stat": false, 00:17:10.570 "allow_accel_sequence": false, 00:17:10.570 "rdma_max_cq_size": 0, 00:17:10.570 "rdma_cm_event_timeout_ms": 0, 00:17:10.570 "dhchap_digests": [ 00:17:10.570 "sha256", 00:17:10.570 "sha384", 00:17:10.570 "sha512" 00:17:10.570 ], 00:17:10.570 "dhchap_dhgroups": [ 00:17:10.570 "null", 00:17:10.570 "ffdhe2048", 00:17:10.570 "ffdhe3072", 00:17:10.570 "ffdhe4096", 00:17:10.570 "ffdhe6144", 00:17:10.570 "ffdhe8192" 00:17:10.570 ] 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "bdev_nvme_set_hotplug", 00:17:10.570 "params": { 00:17:10.570 "period_us": 100000, 00:17:10.570 "enable": false 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "bdev_malloc_create", 00:17:10.570 "params": { 00:17:10.570 "name": "malloc0", 00:17:10.570 "num_blocks": 8192, 00:17:10.570 "block_size": 4096, 00:17:10.570 "physical_block_size": 4096, 00:17:10.570 "uuid": "4353696a-45d9-4fbf-9c7f-e40c57df2f65", 00:17:10.570 "optimal_io_boundary": 0, 00:17:10.570 "md_size": 0, 00:17:10.570 "dif_type": 0, 00:17:10.570 "dif_is_head_of_md": false, 00:17:10.570 "dif_pi_format": 0 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "bdev_wait_for_examine" 00:17:10.570 } 00:17:10.570 ] 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "subsystem": "nbd", 00:17:10.570 "config": [] 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "subsystem": "scheduler", 00:17:10.570 "config": [ 00:17:10.570 { 00:17:10.570 "method": "framework_set_scheduler", 00:17:10.570 "params": { 00:17:10.570 "name": "static" 00:17:10.570 } 00:17:10.570 } 00:17:10.570 ] 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "subsystem": "nvmf", 00:17:10.570 "config": [ 00:17:10.570 { 00:17:10.570 "method": "nvmf_set_config", 00:17:10.570 "params": { 00:17:10.570 "discovery_filter": "match_any", 00:17:10.570 "admin_cmd_passthru": { 00:17:10.570 "identify_ctrlr": false 00:17:10.570 }, 00:17:10.570 "dhchap_digests": [ 00:17:10.570 "sha256", 00:17:10.570 "sha384", 00:17:10.570 "sha512" 00:17:10.570 ], 00:17:10.570 "dhchap_dhgroups": [ 00:17:10.570 "null", 00:17:10.570 "ffdhe2048", 00:17:10.570 "ffdhe3072", 00:17:10.570 "ffdhe4096", 00:17:10.570 "ffdhe6144", 00:17:10.570 "ffdhe8192" 00:17:10.570 ] 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_set_max_subsystems", 00:17:10.570 "params": { 00:17:10.570 "max_subsystems": 1024 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_set_crdt", 00:17:10.570 "params": { 00:17:10.570 "crdt1": 0, 00:17:10.570 "crdt2": 0, 00:17:10.570 "crdt3": 0 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_create_transport", 00:17:10.570 "params": 
{ 00:17:10.570 "trtype": "TCP", 00:17:10.570 "max_queue_depth": 128, 00:17:10.570 "max_io_qpairs_per_ctrlr": 127, 00:17:10.570 "in_capsule_data_size": 4096, 00:17:10.570 "max_io_size": 131072, 00:17:10.570 "io_unit_size": 131072, 00:17:10.570 "max_aq_depth": 128, 00:17:10.570 "num_shared_buffers": 511, 00:17:10.570 "buf_cache_size": 4294967295, 00:17:10.570 "dif_insert_or_strip": false, 00:17:10.570 "zcopy": false, 00:17:10.570 "c2h_success": false, 00:17:10.570 "sock_priority": 0, 00:17:10.570 "abort_timeout_sec": 1, 00:17:10.570 "ack_timeout": 0, 00:17:10.570 "data_wr_pool_size": 0 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_create_subsystem", 00:17:10.570 "params": { 00:17:10.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.570 "allow_any_host": false, 00:17:10.570 "serial_number": "00000000000000000000", 00:17:10.570 "model_number": "SPDK bdev Controller", 00:17:10.570 "max_namespaces": 32, 00:17:10.570 "min_cntlid": 1, 00:17:10.570 "max_cntlid": 65519, 00:17:10.570 "ana_reporting": false 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_subsystem_add_host", 00:17:10.570 "params": { 00:17:10.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.570 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.570 "psk": "key0" 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_subsystem_add_ns", 00:17:10.570 "params": { 00:17:10.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.570 "namespace": { 00:17:10.570 "nsid": 1, 00:17:10.570 "bdev_name": "malloc0", 00:17:10.570 "nguid": "4353696A45D94FBF9C7FE40C57DF2F65", 00:17:10.570 "uuid": "4353696a-45d9-4fbf-9c7f-e40c57df2f65", 00:17:10.570 "no_auto_visible": false 00:17:10.570 } 00:17:10.570 } 00:17:10.570 }, 00:17:10.570 { 00:17:10.570 "method": "nvmf_subsystem_add_listener", 00:17:10.570 "params": { 00:17:10.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.570 "listen_address": { 00:17:10.570 "trtype": "TCP", 00:17:10.570 "adrfam": "IPv4", 00:17:10.570 "traddr": "10.0.0.2", 00:17:10.570 "trsvcid": "4420" 00:17:10.570 }, 00:17:10.570 "secure_channel": false, 00:17:10.570 "sock_impl": "ssl" 00:17:10.570 } 00:17:10.570 } 00:17:10.570 ] 00:17:10.570 } 00:17:10.570 ] 00:17:10.570 }' 00:17:10.570 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:10.829 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:10.829 "subsystems": [ 00:17:10.829 { 00:17:10.829 "subsystem": "keyring", 00:17:10.829 "config": [ 00:17:10.829 { 00:17:10.829 "method": "keyring_file_add_key", 00:17:10.829 "params": { 00:17:10.829 "name": "key0", 00:17:10.829 "path": "/tmp/tmp.KSrNGHk957" 00:17:10.829 } 00:17:10.829 } 00:17:10.829 ] 00:17:10.829 }, 00:17:10.829 { 00:17:10.829 "subsystem": "iobuf", 00:17:10.829 "config": [ 00:17:10.829 { 00:17:10.829 "method": "iobuf_set_options", 00:17:10.829 "params": { 00:17:10.829 "small_pool_count": 8192, 00:17:10.829 "large_pool_count": 1024, 00:17:10.829 "small_bufsize": 8192, 00:17:10.829 "large_bufsize": 135168, 00:17:10.829 "enable_numa": false 00:17:10.829 } 00:17:10.829 } 00:17:10.829 ] 00:17:10.829 }, 00:17:10.829 { 00:17:10.829 "subsystem": "sock", 00:17:10.829 "config": [ 00:17:10.829 { 00:17:10.829 "method": "sock_set_default_impl", 00:17:10.829 "params": { 00:17:10.829 "impl_name": "posix" 00:17:10.829 } 00:17:10.829 }, 00:17:10.829 { 00:17:10.829 "method": "sock_impl_set_options", 00:17:10.829 
"params": { 00:17:10.829 "impl_name": "ssl", 00:17:10.829 "recv_buf_size": 4096, 00:17:10.829 "send_buf_size": 4096, 00:17:10.829 "enable_recv_pipe": true, 00:17:10.829 "enable_quickack": false, 00:17:10.829 "enable_placement_id": 0, 00:17:10.829 "enable_zerocopy_send_server": true, 00:17:10.829 "enable_zerocopy_send_client": false, 00:17:10.829 "zerocopy_threshold": 0, 00:17:10.829 "tls_version": 0, 00:17:10.829 "enable_ktls": false 00:17:10.829 } 00:17:10.829 }, 00:17:10.829 { 00:17:10.829 "method": "sock_impl_set_options", 00:17:10.829 "params": { 00:17:10.829 "impl_name": "posix", 00:17:10.829 "recv_buf_size": 2097152, 00:17:10.829 "send_buf_size": 2097152, 00:17:10.829 "enable_recv_pipe": true, 00:17:10.829 "enable_quickack": false, 00:17:10.829 "enable_placement_id": 0, 00:17:10.829 "enable_zerocopy_send_server": true, 00:17:10.829 "enable_zerocopy_send_client": false, 00:17:10.829 "zerocopy_threshold": 0, 00:17:10.829 "tls_version": 0, 00:17:10.829 "enable_ktls": false 00:17:10.829 } 00:17:10.829 } 00:17:10.829 ] 00:17:10.829 }, 00:17:10.829 { 00:17:10.829 "subsystem": "vmd", 00:17:10.829 "config": [] 00:17:10.829 }, 00:17:10.829 { 00:17:10.829 "subsystem": "accel", 00:17:10.829 "config": [ 00:17:10.829 { 00:17:10.829 "method": "accel_set_options", 00:17:10.829 "params": { 00:17:10.829 "small_cache_size": 128, 00:17:10.829 "large_cache_size": 16, 00:17:10.829 "task_count": 2048, 00:17:10.829 "sequence_count": 2048, 00:17:10.829 "buf_count": 2048 00:17:10.829 } 00:17:10.829 } 00:17:10.829 ] 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "subsystem": "bdev", 00:17:10.830 "config": [ 00:17:10.830 { 00:17:10.830 "method": "bdev_set_options", 00:17:10.830 "params": { 00:17:10.830 "bdev_io_pool_size": 65535, 00:17:10.830 "bdev_io_cache_size": 256, 00:17:10.830 "bdev_auto_examine": true, 00:17:10.830 "iobuf_small_cache_size": 128, 00:17:10.830 "iobuf_large_cache_size": 16 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_raid_set_options", 00:17:10.830 "params": { 00:17:10.830 "process_window_size_kb": 1024, 00:17:10.830 "process_max_bandwidth_mb_sec": 0 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_iscsi_set_options", 00:17:10.830 "params": { 00:17:10.830 "timeout_sec": 30 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_nvme_set_options", 00:17:10.830 "params": { 00:17:10.830 "action_on_timeout": "none", 00:17:10.830 "timeout_us": 0, 00:17:10.830 "timeout_admin_us": 0, 00:17:10.830 "keep_alive_timeout_ms": 10000, 00:17:10.830 "arbitration_burst": 0, 00:17:10.830 "low_priority_weight": 0, 00:17:10.830 "medium_priority_weight": 0, 00:17:10.830 "high_priority_weight": 0, 00:17:10.830 "nvme_adminq_poll_period_us": 10000, 00:17:10.830 "nvme_ioq_poll_period_us": 0, 00:17:10.830 "io_queue_requests": 512, 00:17:10.830 "delay_cmd_submit": true, 00:17:10.830 "transport_retry_count": 4, 00:17:10.830 "bdev_retry_count": 3, 00:17:10.830 "transport_ack_timeout": 0, 00:17:10.830 "ctrlr_loss_timeout_sec": 0, 00:17:10.830 "reconnect_delay_sec": 0, 00:17:10.830 "fast_io_fail_timeout_sec": 0, 00:17:10.830 "disable_auto_failback": false, 00:17:10.830 "generate_uuids": false, 00:17:10.830 "transport_tos": 0, 00:17:10.830 "nvme_error_stat": false, 00:17:10.830 "rdma_srq_size": 0, 00:17:10.830 "io_path_stat": false, 00:17:10.830 "allow_accel_sequence": false, 00:17:10.830 "rdma_max_cq_size": 0, 00:17:10.830 "rdma_cm_event_timeout_ms": 0, 00:17:10.830 "dhchap_digests": [ 00:17:10.830 "sha256", 00:17:10.830 "sha384", 00:17:10.830 
"sha512" 00:17:10.830 ], 00:17:10.830 "dhchap_dhgroups": [ 00:17:10.830 "null", 00:17:10.830 "ffdhe2048", 00:17:10.830 "ffdhe3072", 00:17:10.830 "ffdhe4096", 00:17:10.830 "ffdhe6144", 00:17:10.830 "ffdhe8192" 00:17:10.830 ] 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_nvme_attach_controller", 00:17:10.830 "params": { 00:17:10.830 "name": "nvme0", 00:17:10.830 "trtype": "TCP", 00:17:10.830 "adrfam": "IPv4", 00:17:10.830 "traddr": "10.0.0.2", 00:17:10.830 "trsvcid": "4420", 00:17:10.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.830 "prchk_reftag": false, 00:17:10.830 "prchk_guard": false, 00:17:10.830 "ctrlr_loss_timeout_sec": 0, 00:17:10.830 "reconnect_delay_sec": 0, 00:17:10.830 "fast_io_fail_timeout_sec": 0, 00:17:10.830 "psk": "key0", 00:17:10.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.830 "hdgst": false, 00:17:10.830 "ddgst": false, 00:17:10.830 "multipath": "multipath" 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_nvme_set_hotplug", 00:17:10.830 "params": { 00:17:10.830 "period_us": 100000, 00:17:10.830 "enable": false 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_enable_histogram", 00:17:10.830 "params": { 00:17:10.830 "name": "nvme0n1", 00:17:10.830 "enable": true 00:17:10.830 } 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "method": "bdev_wait_for_examine" 00:17:10.830 } 00:17:10.830 ] 00:17:10.830 }, 00:17:10.830 { 00:17:10.830 "subsystem": "nbd", 00:17:10.830 "config": [] 00:17:10.830 } 00:17:10.830 ] 00:17:10.830 }' 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 875972 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 875972 ']' 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 875972 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 875972 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 875972' 00:17:10.830 killing process with pid 875972 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 875972 00:17:10.830 Received shutdown signal, test time was about 1.000000 seconds 00:17:10.830 00:17:10.830 Latency(us) 00:17:10.830 [2024-11-06T13:00:50.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.830 [2024-11-06T13:00:50.114Z] =================================================================================================================== 00:17:10.830 [2024-11-06T13:00:50.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.830 14:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 875972 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 875890 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 875890 ']' 
00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 875890 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 875890 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 875890' 00:17:10.830 killing process with pid 875890 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 875890 00:17:10.830 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 875890 00:17:11.090 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:11.090 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.090 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.090 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.090 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:11.090 "subsystems": [ 00:17:11.090 { 00:17:11.090 "subsystem": "keyring", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "keyring_file_add_key", 00:17:11.090 "params": { 00:17:11.090 "name": "key0", 00:17:11.090 "path": "/tmp/tmp.KSrNGHk957" 00:17:11.090 } 00:17:11.090 } 00:17:11.090 ] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "iobuf", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "iobuf_set_options", 00:17:11.090 "params": { 00:17:11.090 "small_pool_count": 8192, 00:17:11.090 "large_pool_count": 1024, 00:17:11.090 "small_bufsize": 8192, 00:17:11.090 "large_bufsize": 135168, 00:17:11.090 "enable_numa": false 00:17:11.090 } 00:17:11.090 } 00:17:11.090 ] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "sock", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "sock_set_default_impl", 00:17:11.090 "params": { 00:17:11.090 "impl_name": "posix" 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "sock_impl_set_options", 00:17:11.090 "params": { 00:17:11.090 "impl_name": "ssl", 00:17:11.090 "recv_buf_size": 4096, 00:17:11.090 "send_buf_size": 4096, 00:17:11.090 "enable_recv_pipe": true, 00:17:11.090 "enable_quickack": false, 00:17:11.090 "enable_placement_id": 0, 00:17:11.090 "enable_zerocopy_send_server": true, 00:17:11.090 "enable_zerocopy_send_client": false, 00:17:11.090 "zerocopy_threshold": 0, 00:17:11.090 "tls_version": 0, 00:17:11.090 "enable_ktls": false 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "sock_impl_set_options", 00:17:11.090 "params": { 00:17:11.090 "impl_name": "posix", 00:17:11.090 "recv_buf_size": 2097152, 00:17:11.090 "send_buf_size": 2097152, 00:17:11.090 "enable_recv_pipe": true, 00:17:11.090 "enable_quickack": false, 00:17:11.090 "enable_placement_id": 0, 00:17:11.090 "enable_zerocopy_send_server": true, 00:17:11.090 "enable_zerocopy_send_client": false, 
00:17:11.090 "zerocopy_threshold": 0, 00:17:11.090 "tls_version": 0, 00:17:11.090 "enable_ktls": false 00:17:11.090 } 00:17:11.090 } 00:17:11.090 ] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "vmd", 00:17:11.090 "config": [] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "accel", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "accel_set_options", 00:17:11.090 "params": { 00:17:11.090 "small_cache_size": 128, 00:17:11.090 "large_cache_size": 16, 00:17:11.090 "task_count": 2048, 00:17:11.090 "sequence_count": 2048, 00:17:11.090 "buf_count": 2048 00:17:11.090 } 00:17:11.090 } 00:17:11.090 ] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "bdev", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "bdev_set_options", 00:17:11.090 "params": { 00:17:11.090 "bdev_io_pool_size": 65535, 00:17:11.090 "bdev_io_cache_size": 256, 00:17:11.090 "bdev_auto_examine": true, 00:17:11.090 "iobuf_small_cache_size": 128, 00:17:11.090 "iobuf_large_cache_size": 16 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "bdev_raid_set_options", 00:17:11.090 "params": { 00:17:11.090 "process_window_size_kb": 1024, 00:17:11.090 "process_max_bandwidth_mb_sec": 0 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "bdev_iscsi_set_options", 00:17:11.090 "params": { 00:17:11.090 "timeout_sec": 30 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "bdev_nvme_set_options", 00:17:11.090 "params": { 00:17:11.090 "action_on_timeout": "none", 00:17:11.090 "timeout_us": 0, 00:17:11.090 "timeout_admin_us": 0, 00:17:11.090 "keep_alive_timeout_ms": 10000, 00:17:11.090 "arbitration_burst": 0, 00:17:11.090 "low_priority_weight": 0, 00:17:11.090 "medium_priority_weight": 0, 00:17:11.090 "high_priority_weight": 0, 00:17:11.090 "nvme_adminq_poll_period_us": 10000, 00:17:11.090 "nvme_ioq_poll_period_us": 0, 00:17:11.090 "io_queue_requests": 0, 00:17:11.090 "delay_cmd_submit": true, 00:17:11.090 "transport_retry_count": 4, 00:17:11.090 "bdev_retry_count": 3, 00:17:11.090 "transport_ack_timeout": 0, 00:17:11.090 "ctrlr_loss_timeout_sec": 0, 00:17:11.090 "reconnect_delay_sec": 0, 00:17:11.090 "fast_io_fail_timeout_sec": 0, 00:17:11.090 "disable_auto_failback": false, 00:17:11.090 "generate_uuids": false, 00:17:11.090 "transport_tos": 0, 00:17:11.090 "nvme_error_stat": false, 00:17:11.090 "rdma_srq_size": 0, 00:17:11.090 "io_path_stat": false, 00:17:11.090 "allow_accel_sequence": false, 00:17:11.090 "rdma_max_cq_size": 0, 00:17:11.090 "rdma_cm_event_timeout_ms": 0, 00:17:11.090 "dhchap_digests": [ 00:17:11.090 "sha256", 00:17:11.090 "sha384", 00:17:11.090 "sha512" 00:17:11.090 ], 00:17:11.090 "dhchap_dhgroups": [ 00:17:11.090 "null", 00:17:11.090 "ffdhe2048", 00:17:11.090 "ffdhe3072", 00:17:11.090 "ffdhe4096", 00:17:11.090 "ffdhe6144", 00:17:11.090 "ffdhe8192" 00:17:11.090 ] 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "bdev_nvme_set_hotplug", 00:17:11.090 "params": { 00:17:11.090 "period_us": 100000, 00:17:11.090 "enable": false 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "bdev_malloc_create", 00:17:11.090 "params": { 00:17:11.090 "name": "malloc0", 00:17:11.090 "num_blocks": 8192, 00:17:11.090 "block_size": 4096, 00:17:11.090 "physical_block_size": 4096, 00:17:11.090 "uuid": "4353696a-45d9-4fbf-9c7f-e40c57df2f65", 00:17:11.090 "optimal_io_boundary": 0, 00:17:11.090 "md_size": 0, 00:17:11.090 "dif_type": 0, 00:17:11.090 "dif_is_head_of_md": false, 00:17:11.090 "dif_pi_format": 0 
00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "bdev_wait_for_examine" 00:17:11.090 } 00:17:11.090 ] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "nbd", 00:17:11.090 "config": [] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "scheduler", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "framework_set_scheduler", 00:17:11.090 "params": { 00:17:11.090 "name": "static" 00:17:11.090 } 00:17:11.090 } 00:17:11.090 ] 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "subsystem": "nvmf", 00:17:11.090 "config": [ 00:17:11.090 { 00:17:11.090 "method": "nvmf_set_config", 00:17:11.090 "params": { 00:17:11.090 "discovery_filter": "match_any", 00:17:11.090 "admin_cmd_passthru": { 00:17:11.090 "identify_ctrlr": false 00:17:11.090 }, 00:17:11.090 "dhchap_digests": [ 00:17:11.090 "sha256", 00:17:11.090 "sha384", 00:17:11.090 "sha512" 00:17:11.090 ], 00:17:11.090 "dhchap_dhgroups": [ 00:17:11.090 "null", 00:17:11.090 "ffdhe2048", 00:17:11.090 "ffdhe3072", 00:17:11.090 "ffdhe4096", 00:17:11.090 "ffdhe6144", 00:17:11.090 "ffdhe8192" 00:17:11.090 ] 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "nvmf_set_max_subsystems", 00:17:11.090 "params": { 00:17:11.090 "max_subsystems": 1024 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "nvmf_set_crdt", 00:17:11.090 "params": { 00:17:11.090 "crdt1": 0, 00:17:11.090 "crdt2": 0, 00:17:11.090 "crdt3": 0 00:17:11.090 } 00:17:11.090 }, 00:17:11.090 { 00:17:11.090 "method": "nvmf_create_transport", 00:17:11.090 "params": { 00:17:11.090 "trtype": "TCP", 00:17:11.090 "max_queue_depth": 128, 00:17:11.090 "max_io_qpairs_per_ctrlr": 127, 00:17:11.090 "in_capsule_data_size": 4096, 00:17:11.091 "max_io_size": 131072, 00:17:11.091 "io_unit_size": 131072, 00:17:11.091 "max_aq_depth": 128, 00:17:11.091 "num_shared_buffers": 511, 00:17:11.091 "buf_cache_size": 4294967295, 00:17:11.091 "dif_insert_or_strip": false, 00:17:11.091 "zcopy": false, 00:17:11.091 "c2h_success": false, 00:17:11.091 "sock_priority": 0, 00:17:11.091 "abort_timeout_sec": 1, 00:17:11.091 "ack_timeout": 0, 00:17:11.091 "data_wr_pool_size": 0 00:17:11.091 } 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "method": "nvmf_create_subsystem", 00:17:11.091 "params": { 00:17:11.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.091 "allow_any_host": false, 00:17:11.091 "serial_number": "00000000000000000000", 00:17:11.091 "model_number": "SPDK bdev Controller", 00:17:11.091 "max_namespaces": 32, 00:17:11.091 "min_cntlid": 1, 00:17:11.091 "max_cntlid": 65519, 00:17:11.091 "ana_reporting": false 00:17:11.091 } 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "method": "nvmf_subsystem_add_host", 00:17:11.091 "params": { 00:17:11.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.091 "host": "nqn.2016-06.io.spdk:host1", 00:17:11.091 "psk": "key0" 00:17:11.091 } 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "method": "nvmf_subsystem_add_ns", 00:17:11.091 "params": { 00:17:11.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.091 "namespace": { 00:17:11.091 "nsid": 1, 00:17:11.091 "bdev_name": "malloc0", 00:17:11.091 "nguid": "4353696A45D94FBF9C7FE40C57DF2F65", 00:17:11.091 "uuid": "4353696a-45d9-4fbf-9c7f-e40c57df2f65", 00:17:11.091 "no_auto_visible": false 00:17:11.091 } 00:17:11.091 } 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "method": "nvmf_subsystem_add_listener", 00:17:11.091 "params": { 00:17:11.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.091 "listen_address": { 00:17:11.091 "trtype": "TCP", 00:17:11.091 "adrfam": "IPv4", 
00:17:11.091 "traddr": "10.0.0.2", 00:17:11.091 "trsvcid": "4420" 00:17:11.091 }, 00:17:11.091 "secure_channel": false, 00:17:11.091 "sock_impl": "ssl" 00:17:11.091 } 00:17:11.091 } 00:17:11.091 ] 00:17:11.091 } 00:17:11.091 ] 00:17:11.091 }' 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=876653 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 876653 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 876653 ']' 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.091 14:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:11.091 [2024-11-06 14:00:50.244334] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:17:11.091 [2024-11-06 14:00:50.244386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.091 [2024-11-06 14:00:50.315388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.091 [2024-11-06 14:00:50.343058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.091 [2024-11-06 14:00:50.343085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.091 [2024-11-06 14:00:50.343092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.091 [2024-11-06 14:00:50.343096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.091 [2024-11-06 14:00:50.343101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:11.091 [2024-11-06 14:00:50.343605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.350 [2024-11-06 14:00:50.538356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.350 [2024-11-06 14:00:50.570381] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.350 [2024-11-06 14:00:50.570585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=876679 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 876679 /var/tmp/bdevperf.sock 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 876679 ']' 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:11.919 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:11.919 "subsystems": [ 00:17:11.919 { 00:17:11.919 "subsystem": "keyring", 00:17:11.919 "config": [ 00:17:11.919 { 00:17:11.919 "method": "keyring_file_add_key", 00:17:11.919 "params": { 00:17:11.919 "name": "key0", 00:17:11.919 "path": "/tmp/tmp.KSrNGHk957" 00:17:11.919 } 00:17:11.919 } 00:17:11.919 ] 00:17:11.919 }, 00:17:11.919 { 00:17:11.919 "subsystem": "iobuf", 00:17:11.919 "config": [ 00:17:11.919 { 00:17:11.919 "method": "iobuf_set_options", 00:17:11.919 "params": { 00:17:11.919 "small_pool_count": 8192, 00:17:11.919 "large_pool_count": 1024, 00:17:11.919 "small_bufsize": 8192, 00:17:11.919 "large_bufsize": 135168, 00:17:11.919 "enable_numa": false 00:17:11.919 } 00:17:11.919 } 00:17:11.919 ] 00:17:11.919 }, 00:17:11.919 { 00:17:11.919 "subsystem": "sock", 00:17:11.919 "config": [ 00:17:11.919 { 00:17:11.919 "method": "sock_set_default_impl", 00:17:11.919 "params": { 00:17:11.919 "impl_name": "posix" 00:17:11.919 } 00:17:11.919 }, 00:17:11.919 { 00:17:11.919 "method": "sock_impl_set_options", 00:17:11.919 "params": { 00:17:11.919 "impl_name": "ssl", 00:17:11.919 "recv_buf_size": 4096, 00:17:11.919 "send_buf_size": 4096, 00:17:11.919 "enable_recv_pipe": true, 00:17:11.919 "enable_quickack": false, 00:17:11.919 "enable_placement_id": 0, 00:17:11.919 "enable_zerocopy_send_server": true, 00:17:11.919 "enable_zerocopy_send_client": false, 00:17:11.919 "zerocopy_threshold": 0, 00:17:11.919 "tls_version": 0, 00:17:11.919 "enable_ktls": false 00:17:11.919 } 00:17:11.919 }, 00:17:11.919 { 00:17:11.919 "method": "sock_impl_set_options", 00:17:11.919 "params": { 00:17:11.919 "impl_name": "posix", 00:17:11.919 "recv_buf_size": 2097152, 00:17:11.919 "send_buf_size": 2097152, 00:17:11.919 "enable_recv_pipe": true, 00:17:11.919 "enable_quickack": false, 00:17:11.919 "enable_placement_id": 0, 00:17:11.919 "enable_zerocopy_send_server": true, 00:17:11.920 "enable_zerocopy_send_client": false, 00:17:11.920 "zerocopy_threshold": 0, 00:17:11.920 "tls_version": 0, 00:17:11.920 "enable_ktls": false 00:17:11.920 } 00:17:11.920 } 00:17:11.920 ] 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "subsystem": "vmd", 00:17:11.920 "config": [] 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "subsystem": "accel", 00:17:11.920 "config": [ 00:17:11.920 { 00:17:11.920 "method": "accel_set_options", 00:17:11.920 "params": { 00:17:11.920 "small_cache_size": 128, 00:17:11.920 "large_cache_size": 16, 00:17:11.920 "task_count": 2048, 00:17:11.920 "sequence_count": 2048, 00:17:11.920 "buf_count": 2048 00:17:11.920 } 00:17:11.920 } 00:17:11.920 ] 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "subsystem": "bdev", 00:17:11.920 "config": [ 00:17:11.920 { 00:17:11.920 "method": "bdev_set_options", 00:17:11.920 "params": { 00:17:11.920 "bdev_io_pool_size": 65535, 00:17:11.920 "bdev_io_cache_size": 256, 00:17:11.920 "bdev_auto_examine": true, 00:17:11.920 "iobuf_small_cache_size": 128, 00:17:11.920 "iobuf_large_cache_size": 16 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": 
"bdev_raid_set_options", 00:17:11.920 "params": { 00:17:11.920 "process_window_size_kb": 1024, 00:17:11.920 "process_max_bandwidth_mb_sec": 0 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": "bdev_iscsi_set_options", 00:17:11.920 "params": { 00:17:11.920 "timeout_sec": 30 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": "bdev_nvme_set_options", 00:17:11.920 "params": { 00:17:11.920 "action_on_timeout": "none", 00:17:11.920 "timeout_us": 0, 00:17:11.920 "timeout_admin_us": 0, 00:17:11.920 "keep_alive_timeout_ms": 10000, 00:17:11.920 "arbitration_burst": 0, 00:17:11.920 "low_priority_weight": 0, 00:17:11.920 "medium_priority_weight": 0, 00:17:11.920 "high_priority_weight": 0, 00:17:11.920 "nvme_adminq_poll_period_us": 10000, 00:17:11.920 "nvme_ioq_poll_period_us": 0, 00:17:11.920 "io_queue_requests": 512, 00:17:11.920 "delay_cmd_submit": true, 00:17:11.920 "transport_retry_count": 4, 00:17:11.920 "bdev_retry_count": 3, 00:17:11.920 "transport_ack_timeout": 0, 00:17:11.920 "ctrlr_loss_timeout_sec": 0, 00:17:11.920 "reconnect_delay_sec": 0, 00:17:11.920 "fast_io_fail_timeout_sec": 0, 00:17:11.920 "disable_auto_failback": false, 00:17:11.920 "generate_uuids": false, 00:17:11.920 "transport_tos": 0, 00:17:11.920 "nvme_error_stat": false, 00:17:11.920 "rdma_srq_size": 0, 00:17:11.920 "io_path_stat": false, 00:17:11.920 "allow_accel_sequence": false, 00:17:11.920 "rdma_max_cq_size": 0, 00:17:11.920 "rdma_cm_event_timeout_ms": 0, 00:17:11.920 "dhchap_digests": [ 00:17:11.920 "sha256", 00:17:11.920 "sha384", 00:17:11.920 "sha512" 00:17:11.920 ], 00:17:11.920 "dhchap_dhgroups": [ 00:17:11.920 "null", 00:17:11.920 "ffdhe2048", 00:17:11.920 "ffdhe3072", 00:17:11.920 "ffdhe4096", 00:17:11.920 "ffdhe6144", 00:17:11.920 "ffdhe8192" 00:17:11.920 ] 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": "bdev_nvme_attach_controller", 00:17:11.920 "params": { 00:17:11.920 "name": "nvme0", 00:17:11.920 "trtype": "TCP", 00:17:11.920 "adrfam": "IPv4", 00:17:11.920 "traddr": "10.0.0.2", 00:17:11.920 "trsvcid": "4420", 00:17:11.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.920 "prchk_reftag": false, 00:17:11.920 "prchk_guard": false, 00:17:11.920 "ctrlr_loss_timeout_sec": 0, 00:17:11.920 "reconnect_delay_sec": 0, 00:17:11.920 "fast_io_fail_timeout_sec": 0, 00:17:11.920 "psk": "key0", 00:17:11.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.920 "hdgst": false, 00:17:11.920 "ddgst": false, 00:17:11.920 "multipath": "multipath" 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": "bdev_nvme_set_hotplug", 00:17:11.920 "params": { 00:17:11.920 "period_us": 100000, 00:17:11.920 "enable": false 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": "bdev_enable_histogram", 00:17:11.920 "params": { 00:17:11.920 "name": "nvme0n1", 00:17:11.920 "enable": true 00:17:11.920 } 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "method": "bdev_wait_for_examine" 00:17:11.920 } 00:17:11.920 ] 00:17:11.920 }, 00:17:11.920 { 00:17:11.920 "subsystem": "nbd", 00:17:11.920 "config": [] 00:17:11.920 } 00:17:11.920 ] 00:17:11.920 }' 00:17:11.920 [2024-11-06 14:00:51.071883] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:17:11.920 [2024-11-06 14:00:51.071932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876679 ] 00:17:11.920 [2024-11-06 14:00:51.137158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.920 [2024-11-06 14:00:51.166632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.180 [2024-11-06 14:00:51.302812] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.747 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.747 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:12.747 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:12.747 14:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:12.747 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.747 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.006 Running I/O for 1 seconds... 00:17:13.942 1796.00 IOPS, 7.02 MiB/s 00:17:13.942 Latency(us) 00:17:13.942 [2024-11-06T13:00:53.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.942 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:13.942 Verification LBA range: start 0x0 length 0x2000 00:17:13.942 nvme0n1 : 1.03 1863.21 7.28 0.00 0.00 67832.34 5679.79 173015.04 00:17:13.942 [2024-11-06T13:00:53.226Z] =================================================================================================================== 00:17:13.942 [2024-11-06T13:00:53.226Z] Total : 1863.21 7.28 0.00 0.00 67832.34 5679.79 173015.04 00:17:13.942 { 00:17:13.942 "results": [ 00:17:13.942 { 00:17:13.942 "job": "nvme0n1", 00:17:13.942 "core_mask": "0x2", 00:17:13.942 "workload": "verify", 00:17:13.942 "status": "finished", 00:17:13.942 "verify_range": { 00:17:13.942 "start": 0, 00:17:13.942 "length": 8192 00:17:13.942 }, 00:17:13.942 "queue_depth": 128, 00:17:13.942 "io_size": 4096, 00:17:13.942 "runtime": 1.033163, 00:17:13.942 "iops": 1863.210354997227, 00:17:13.942 "mibps": 7.2781654492079175, 00:17:13.942 "io_failed": 0, 00:17:13.942 "io_timeout": 0, 00:17:13.942 "avg_latency_us": 67832.34172121213, 00:17:13.942 "min_latency_us": 5679.786666666667, 00:17:13.942 "max_latency_us": 173015.04 00:17:13.942 } 00:17:13.942 ], 00:17:13.942 "core_count": 1 00:17:13.942 } 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
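Because bdevperf was launched with -z it sits idle until told to run; the 1-second verify pass above was driven entirely over its RPC socket. Condensed from the trace, with the full script paths abbreviated:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
                            # sanity check: the TLS-attached controller, nvme0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
                            # kicks off the workload and prints the JSON results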
00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:13.942 nvmf_trace.0 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 876679 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 876679 ']' 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 876679 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:13.942 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 876679 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 876679' 00:17:14.202 killing process with pid 876679 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 876679 00:17:14.202 Received shutdown signal, test time was about 1.000000 seconds 00:17:14.202 00:17:14.202 Latency(us) 00:17:14.202 [2024-11-06T13:00:53.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.202 [2024-11-06T13:00:53.486Z] =================================================================================================================== 00:17:14.202 [2024-11-06T13:00:53.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 876679 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:14.202 rmmod nvme_tcp 00:17:14.202 rmmod nvme_fabrics 00:17:14.202 rmmod nvme_keyring 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.202 14:00:53 
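As part of cleanup, process_shm grabs the SPDK trace buffer left in shared memory and archives it next to the build output; roughly (output directory hypothetical):

    shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')         # nvmf_trace.0 here
    tar -C /dev/shm/ -cvzf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"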
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 876653 ']' 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 876653 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 876653 ']' 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 876653 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 876653 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 876653' 00:17:14.202 killing process with pid 876653 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 876653 00:17:14.202 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 876653 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.461 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9kBc8QC6cn /tmp/tmp.vdvzc4pgID /tmp/tmp.KSrNGHk957 00:17:16.363 00:17:16.363 real 1m16.727s 00:17:16.363 user 2m5.266s 00:17:16.363 sys 0m18.927s 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.363 ************************************ 00:17:16.363 END TEST nvmf_tls 00:17:16.363 
************************************ 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.363 ************************************ 00:17:16.363 START TEST nvmf_fips 00:17:16.363 ************************************ 00:17:16.363 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:16.621 * Looking for test storage... 00:17:16.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:16.621 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:16.621 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:17:16.621 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:16.621 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:16.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.622 --rc genhtml_branch_coverage=1 00:17:16.622 --rc genhtml_function_coverage=1 00:17:16.622 --rc genhtml_legend=1 00:17:16.622 --rc geninfo_all_blocks=1 00:17:16.622 --rc geninfo_unexecuted_blocks=1 00:17:16.622 00:17:16.622 ' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:16.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.622 --rc genhtml_branch_coverage=1 00:17:16.622 --rc genhtml_function_coverage=1 00:17:16.622 --rc genhtml_legend=1 00:17:16.622 --rc geninfo_all_blocks=1 00:17:16.622 --rc geninfo_unexecuted_blocks=1 00:17:16.622 00:17:16.622 ' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:16.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.622 --rc genhtml_branch_coverage=1 00:17:16.622 --rc genhtml_function_coverage=1 00:17:16.622 --rc genhtml_legend=1 00:17:16.622 --rc geninfo_all_blocks=1 00:17:16.622 --rc geninfo_unexecuted_blocks=1 00:17:16.622 00:17:16.622 ' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:16.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.622 --rc genhtml_branch_coverage=1 00:17:16.622 --rc genhtml_function_coverage=1 00:17:16.622 --rc genhtml_legend=1 00:17:16.622 --rc geninfo_all_blocks=1 00:17:16.622 --rc geninfo_unexecuted_blocks=1 00:17:16.622 00:17:16.622 ' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:16.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:16.622 14:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:16.622 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:16.623 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:16.882 Error setting digest 00:17:16.882 40E274BD8B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:16.882 40E274BD8B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.882 
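The FIPS gate traced above has three parts: fips.sh requires OpenSSL >= 3.0.0 (3.1.1 here), verifies that fips.so exists under the modules directory and that `openssl list -providers` reports both a base and a fips provider (tolerating Red Hat's disabled `openssl fipsinstall`), and finally proves enforcement by expecting a non-approved digest to fail — hence the MD5 "unsupported" error above, which the NOT wrapper counts as success. A hedged condensation:

    openssl version | awk '{print $2}'               # 3.1.1; must be >= 3.0.0
    test -f "$(openssl info -modulesdir)/fips.so"    # FIPS module present
    openssl list -providers | grep name              # expect base + fips providers
    OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
      && echo 'FIPS not enforced' || echo 'MD5 rejected, as required'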
14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:17:16.882 14:00:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.154 14:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:22.154 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:22.154 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:22.154 14:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:22.154 Found net devices under 0000:31:00.0: cvl_0_0 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:22.154 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:22.155 Found net devices under 0000:31:00.1: cvl_0_1 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:22.155 14:01:00 
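The discovery loop above matches both Intel E810 functions (vendor 0x8086, device 0x159b, bound to the ice driver) and resolves each PCI address to its kernel netdev through sysfs — the same lookup the script's pci_net_devs glob performs:

    pci=0000:31:00.0                       # first E810 port in this run
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 (cvl_0_1 for 0000:31:00.1)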
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:22.155 14:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:22.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:17:22.155 00:17:22.155 --- 10.0.0.2 ping statistics --- 00:17:22.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.155 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:17:22.155 00:17:22.155 --- 10.0.0.1 ping statistics --- 00:17:22.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.155 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=881715 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 881715 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 881715 ']' 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.155 [2024-11-06 14:01:01.237352] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
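Before the target was launched, nvmf_tcp_init split the two ports across network namespaces: cvl_0_0 becomes the target side (10.0.0.2) inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-verified. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator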
00:17:22.155 [2024-11-06 14:01:01.237393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.155 [2024-11-06 14:01:01.298332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.155 [2024-11-06 14:01:01.326385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.155 [2024-11-06 14:01:01.326411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.155 [2024-11-06 14:01:01.326416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.155 [2024-11-06 14:01:01.326421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.155 [2024-11-06 14:01:01.326425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.155 [2024-11-06 14:01:01.326850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.O8L 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.O8L 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.O8L 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.O8L 00:17:22.155 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.416 [2024-11-06 14:01:01.574157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.416 [2024-11-06 14:01:01.590164] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:22.416 [2024-11-06 14:01:01.590347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.416 malloc0 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.416 14:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=881747 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 881747 /var/tmp/bdevperf.sock 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 881747 ']' 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:22.416 14:01:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:22.416 [2024-11-06 14:01:01.691140] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:17:22.416 [2024-11-06 14:01:01.691194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881747 ] 00:17:22.675 [2024-11-06 14:01:01.768549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.675 [2024-11-06 14:01:01.803566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.244 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.244 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:17:23.244 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.O8L 00:17:23.502 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:23.502 [2024-11-06 14:01:02.756206] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.761 TLSTESTn1 00:17:23.761 14:01:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.761 Running I/O for 10 seconds... 
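The TLS session under test hinges on a pre-shared key in the NVMe/TCP interchange format (NVMeTLSkey-1:01:...). setup_nvmf_tgt_conf registered the same key file with the target earlier; the initiator side, condensed from the trace above with rpc.py paths abbreviated, is:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)                  # /tmp/spdk-psk.O8L here
    echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # the 10 s verify run below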
00:17:26.076 1760.00 IOPS, 6.88 MiB/s
[2024-11-06T13:01:06.300Z] 2016.50 IOPS, 7.88 MiB/s
[2024-11-06T13:01:07.238Z] 2755.00 IOPS, 10.76 MiB/s
[2024-11-06T13:01:08.177Z] 2921.00 IOPS, 11.41 MiB/s
[2024-11-06T13:01:09.115Z] 3198.80 IOPS, 12.50 MiB/s
[2024-11-06T13:01:10.054Z] 3289.50 IOPS, 12.85 MiB/s
[2024-11-06T13:01:10.992Z] 3459.29 IOPS, 13.51 MiB/s
[2024-11-06T13:01:12.371Z] 3403.75 IOPS, 13.30 MiB/s
[2024-11-06T13:01:13.308Z] 3393.00 IOPS, 13.25 MiB/s
[2024-11-06T13:01:13.308Z] 3491.80 IOPS, 13.64 MiB/s
00:17:34.024 Latency(us)
00:17:34.024 [2024-11-06T13:01:13.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:34.024 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:34.024 Verification LBA range: start 0x0 length 0x2000
00:17:34.024 TLSTESTn1 : 10.03 3494.09 13.65 0.00 0.00 36576.21 4478.29 166898.35
00:17:34.024 [2024-11-06T13:01:13.308Z] ===================================================================================================================
00:17:34.024 [2024-11-06T13:01:13.308Z] Total : 3494.09 13.65 0.00 0.00 36576.21 4478.29 166898.35
00:17:34.024 {
00:17:34.024 "results": [
00:17:34.024 {
00:17:34.024 "job": "TLSTESTn1",
00:17:34.024 "core_mask": "0x4",
00:17:34.024 "workload": "verify",
00:17:34.024 "status": "finished",
00:17:34.024 "verify_range": {
00:17:34.024 "start": 0,
00:17:34.024 "length": 8192
00:17:34.024 },
00:17:34.024 "queue_depth": 128,
00:17:34.024 "io_size": 4096,
00:17:34.024 "runtime": 10.030087,
00:17:34.024 "iops": 3494.0873394218816,
00:17:34.024 "mibps": 13.648778669616725,
00:17:34.024 "io_failed": 0,
00:17:34.024 "io_timeout": 0,
00:17:34.024 "avg_latency_us": 36576.214879491716,
00:17:34.024 "min_latency_us": 4478.293333333333,
00:17:34.024 "max_latency_us": 166898.34666666668
00:17:34.024 }
00:17:34.024 ],
00:17:34.024 "core_count": 1
00:17:34.024 }
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files
00:17:34.024 14:01:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:17:34.024 nvmf_trace.0
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 881747
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 881747 ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 881747
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 881747
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 881747'
00:17:34.024 killing process with pid 881747
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 881747
00:17:34.024 Received shutdown signal, test time was about 10.000000 seconds
00:17:34.024
00:17:34.024 Latency(us)
00:17:34.024 [2024-11-06T13:01:13.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:34.024 [2024-11-06T13:01:13.308Z] ===================================================================================================================
00:17:34.024 [2024-11-06T13:01:13.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 881747
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:34.024 rmmod nvme_tcp
00:17:34.024 rmmod nvme_fabrics
00:17:34.024 rmmod nvme_keyring
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 881715 ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 881715
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 881715 ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 881715
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:34.024 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 881715
00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:17:34.284 14:01:13
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 881715' 00:17:34.284 killing process with pid 881715 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 881715 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 881715 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.284 14:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.O8L 00:17:36.297 00:17:36.297 real 0m19.836s 00:17:36.297 user 0m24.205s 00:17:36.297 sys 0m6.208s 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:36.297 ************************************ 00:17:36.297 END TEST nvmf_fips 00:17:36.297 ************************************ 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.297 ************************************ 00:17:36.297 START TEST nvmf_control_msg_list 00:17:36.297 ************************************ 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:36.297 * Looking for test storage... 
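The fips run above finishes with a fixed teardown pattern: every firewall rule the harness installs carries an SPDK_NVMF comment, so cleanup is a filtered save/restore, followed by dropping the target namespace and scrubbing the PSK file. A condensed sketch (the explicit namespace delete is an assumption about what _remove_spdk_ns amounts to here; the other two commands appear verbatim in the log):

    # Remove only the rules tagged by the harness; everything else survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed equivalent of _remove_spdk_ns for this run's target namespace.
    ip netns delete cvl_0_0_ns_spdk
    # Scrub the PSK interchange file so the key does not outlive the test.
    rm -f /tmp/spdk-psk.O8L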
00:17:36.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.297 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.557 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.558 --rc genhtml_branch_coverage=1 00:17:36.558 --rc genhtml_function_coverage=1 00:17:36.558 --rc genhtml_legend=1 00:17:36.558 --rc geninfo_all_blocks=1 00:17:36.558 --rc geninfo_unexecuted_blocks=1 00:17:36.558 00:17:36.558 ' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.558 --rc genhtml_branch_coverage=1 00:17:36.558 --rc genhtml_function_coverage=1 00:17:36.558 --rc genhtml_legend=1 00:17:36.558 --rc geninfo_all_blocks=1 00:17:36.558 --rc geninfo_unexecuted_blocks=1 00:17:36.558 00:17:36.558 ' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.558 --rc genhtml_branch_coverage=1 00:17:36.558 --rc genhtml_function_coverage=1 00:17:36.558 --rc genhtml_legend=1 00:17:36.558 --rc geninfo_all_blocks=1 00:17:36.558 --rc geninfo_unexecuted_blocks=1 00:17:36.558 00:17:36.558 ' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.558 --rc genhtml_branch_coverage=1 00:17:36.558 --rc genhtml_function_coverage=1 00:17:36.558 --rc genhtml_legend=1 00:17:36.558 --rc geninfo_all_blocks=1 00:17:36.558 --rc geninfo_unexecuted_blocks=1 00:17:36.558 00:17:36.558 ' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:36.558 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.559 14:01:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:17:41.832 14:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:41.832 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.832 14:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:41.832 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:41.832 Found net devices under 0000:31:00.0: cvl_0_0 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:41.832 Found net devices under 0000:31:00.1: cvl_0_1 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.832 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.832 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.832 14:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:41.832 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:41.832 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:41.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:41.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms
00:17:41.833
00:17:41.833 --- 10.0.0.2 ping statistics ---
00:17:41.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:41.833 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:41.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:41.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:17:41.833
00:17:41.833 --- 10.0.0.1 ping statistics ---
00:17:41.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:41.833 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=888774
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 888774
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 888774 ']'
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:41.833 14:01:21
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:41.833 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:41.833 [2024-11-06 14:01:21.101756] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:17:41.833 [2024-11-06 14:01:21.101819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.092 [2024-11-06 14:01:21.189537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.092 [2024-11-06 14:01:21.225367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.092 [2024-11-06 14:01:21.225412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.092 [2024-11-06 14:01:21.225422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.092 [2024-11-06 14:01:21.225430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.092 [2024-11-06 14:01:21.225437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
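nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and waiting until its RPC socket answers. A simplified stand-in for waitforlisten, assuming $SPDK as before (the polling loop is an approximation of the harness helper, not a copy of it):

    # Start the target in the namespace created earlier: shm id 0, all trace groups.
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the RPC socket; rpc_get_methods succeeds once the app is listening.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died before the socket came up
        sleep 0.5
    done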
00:17:42.092 [2024-11-06 14:01:21.226003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:42.659 [2024-11-06 14:01:21.911881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:42.659 Malloc0 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.659 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:42.917 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.917 14:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.917 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.917 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:42.917 [2024-11-06 14:01:21.946815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.917 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=888805 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=888806 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=888807 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 888805 00:17:42.918 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:42.918 [2024-11-06 14:01:22.005539] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:42.918 [2024-11-06 14:01:22.005977] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:42.918 [2024-11-06 14:01:22.015284] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:43.855 Initializing NVMe Controllers 00:17:43.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:43.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:43.855 Initialization complete. Launching workers. 
00:17:43.855 ========================================================
00:17:43.855 Latency(us)
00:17:43.855 Device Information : IOPS MiB/s Average min max
00:17:43.855 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1737.00 6.79 575.71 113.44 769.24
00:17:43.855 ========================================================
00:17:43.855 Total : 1737.00 6.79 575.71 113.44 769.24
00:17:43.855
00:17:43.855 [2024-11-06 14:01:23.079229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b220 is same with the state(6) to be set
00:17:43.855 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 888806
00:17:43.855 Initializing NVMe Controllers
00:17:43.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:17:43.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:17:43.855 Initialization complete. Launching workers.
00:17:43.855 ========================================================
00:17:43.855 Latency(us)
00:17:43.855 Device Information : IOPS MiB/s Average min max
00:17:43.855 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1647.00 6.43 606.92 301.93 890.92
00:17:43.855 ========================================================
00:17:43.855 Total : 1647.00 6.43 606.92 301.93 890.92
00:17:43.855
00:17:44.114 Initializing NVMe Controllers
00:17:44.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:17:44.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:17:44.114 Initialization complete. Launching workers.
00:17:44.114 ========================================================
00:17:44.114 Latency(us)
00:17:44.114 Device Information : IOPS MiB/s Average min max
00:17:44.114 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40919.31 40737.46 41253.23
00:17:44.114 ========================================================
00:17:44.114 Total : 25.00 0.10 40919.31 40737.46 41253.23
00:17:44.114
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 888807
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:44.114 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:44.114 rmmod nvme_tcp
00:17:44.114 rmmod nvme_fabrics
00:17:44.115 rmmod nvme_keyring
00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:17:44.115 14:01:23
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 888774 ']' 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 888774 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 888774 ']' 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 888774 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 888774 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 888774' 00:17:44.115 killing process with pid 888774 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 888774 00:17:44.115 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 888774 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.373 14:01:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.281 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:46.542 00:17:46.542 real 0m10.043s 00:17:46.542 user 0m7.228s 00:17:46.542 sys 0m4.685s 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.542 ************************************ 00:17:46.542 END TEST nvmf_control_msg_list 00:17:46.542 ************************************ 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:46.542 ************************************ 00:17:46.542 START TEST nvmf_wait_for_buf 00:17:46.542 ************************************ 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:46.542 * Looking for test storage... 00:17:46.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.542 --rc genhtml_branch_coverage=1 00:17:46.542 --rc genhtml_function_coverage=1 00:17:46.542 --rc genhtml_legend=1 00:17:46.542 --rc geninfo_all_blocks=1 00:17:46.542 --rc geninfo_unexecuted_blocks=1 00:17:46.542 00:17:46.542 ' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.542 --rc genhtml_branch_coverage=1 00:17:46.542 --rc genhtml_function_coverage=1 00:17:46.542 --rc genhtml_legend=1 00:17:46.542 --rc geninfo_all_blocks=1 00:17:46.542 --rc geninfo_unexecuted_blocks=1 00:17:46.542 00:17:46.542 ' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.542 --rc genhtml_branch_coverage=1 00:17:46.542 --rc genhtml_function_coverage=1 00:17:46.542 --rc genhtml_legend=1 00:17:46.542 --rc geninfo_all_blocks=1 00:17:46.542 --rc geninfo_unexecuted_blocks=1 00:17:46.542 00:17:46.542 ' 00:17:46.542 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.542 --rc genhtml_branch_coverage=1 00:17:46.542 --rc genhtml_function_coverage=1 00:17:46.542 --rc genhtml_legend=1 00:17:46.543 --rc geninfo_all_blocks=1 00:17:46.543 --rc geninfo_unexecuted_blocks=1 00:17:46.543 00:17:46.543 ' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.543 14:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
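
The "[: : integer expression expected" complaint logged above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': a numeric test against a flag that is unset in this configuration, so test sees an empty string where it expects an integer. It is harmless here (the test fails and the branch simply isn't taken), but the defensive spelling defaults the operand first; FLAG below is a stand-in, since the trace only shows the already-expanded '':

    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag set: append the corresponding nvmf_tgt arguments"
    fi
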
'[' -z tcp ']' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:17:46.543 14:01:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.826 
14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:51.826 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:51.826 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:51.826 Found net devices under 0000:31:00.0: cvl_0_0 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:51.826 Found net devices under 0000:31:00.1: cvl_0_1 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.826 14:01:30 
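
The device discovery loop just traced resolves each supported PCI function to its kernel net device purely through sysfs; condensed, with the PCI list hard-coded to the two E810 functions found on this node:

    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue   # glob may not match if the NIC has no bound driver
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

On this runner both functions are bound to ice and expose cvl_0_0 and cvl_0_1, so is_hw=yes and the TCP init path is taken.
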
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.826 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.827 14:01:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.827 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.827 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.827 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.827 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:52.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:17:52.086 00:17:52.086 --- 10.0.0.2 ping statistics --- 00:17:52.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.086 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:52.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:17:52.086 00:17:52.086 --- 10.0.0.1 ping statistics --- 00:17:52.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.086 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=893491 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 893491 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 893491 ']' 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:52.086 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.086 [2024-11-06 14:01:31.265748] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
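
The network bring-up traced across the last few entries (nvmf_tcp_init, common.sh@250-291) reduces to: move one port of the E810 pair into a private namespace to act as the NVMe-oF target, keep its peer on the host as the initiator, open TCP/4420, and prove reachability both ways before the target app starts. Condensed, with device names as in the log (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target, inside the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> host

The comment tag on the iptables rule is what lets teardown later remove exactly the rules this test added.
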
00:17:52.086 [2024-11-06 14:01:31.265822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.086 [2024-11-06 14:01:31.359298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.346 [2024-11-06 14:01:31.411259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.346 [2024-11-06 14:01:31.411315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.346 [2024-11-06 14:01:31.411324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.346 [2024-11-06 14:01:31.411331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.346 [2024-11-06 14:01:31.411337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.346 [2024-11-06 14:01:31.412150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.918 14:01:32 
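
Because nvmf_tgt is launched with --wait-for-rpc, the test gets to shrink the iobuf small pool to just 154 buffers before subsystem initialization, which guarantees the TCP transport will exhaust it and exercise the wait-for-buffer path under load. The rpc_cmd sequence traced above, replayed as a sketch with scripts/rpc.py against the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
    $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $RPC framework_start_init
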
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:52.918 Malloc0 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.918 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:53.179 [2024-11-06 14:01:32.201671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:53.179 [2024-11-06 14:01:32.225962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.179 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:53.179 [2024-11-06 14:01:32.315361] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:54.562 Initializing NVMe Controllers 00:17:54.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:54.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:54.562 Initialization complete. Launching workers. 00:17:54.562 ======================================================== 00:17:54.562 Latency(us) 00:17:54.562 Device Information : IOPS MiB/s Average min max 00:17:54.562 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.41 8026.99 63858.94 00:17:54.562 ======================================================== 00:17:54.562 Total : 129.00 16.12 32294.41 8026.99 63858.94 00:17:54.562 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.822 rmmod nvme_tcp 00:17:54.822 rmmod nvme_fabrics 00:17:54.822 rmmod nvme_keyring 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 893491 ']' 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 893491 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 893491 ']' 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 893491 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
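
For reference, the pass/fail core of wait_for_buf.sh as traced above: expose a 32 MiB malloc bdev over a deliberately under-buffered TCP transport (-n 24 -b 24), push 128 KiB QD4 random reads at it for one second, then require a non-zero small-pool retry count (2038 on this run). A condensed replay, assuming the same RPC socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    retry=$($RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [ "$retry" -eq 0 ] && { echo "FAIL: small pool never starved"; exit 1; }
    echo "OK: $retry small-buffer retries observed"

The high average latency in the table above (roughly 32 ms per 128 KiB read) is expected: most I/Os spend their time queued waiting for a buffer, which is exactly the behavior under test.
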
common/autotest_common.sh@957 -- # uname 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 893491 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 893491' 00:17:54.822 killing process with pid 893491 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 893491 00:17:54.822 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 893491 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.822 14:01:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.361 00:17:57.361 real 0m10.520s 00:17:57.361 user 0m4.356s 00:17:57.361 sys 0m4.540s 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:57.361 ************************************ 00:17:57.361 END TEST nvmf_wait_for_buf 00:17:57.361 ************************************ 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:17:57.361 14:01:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.361 14:01:36 
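
Teardown (nvmftestfini) unwinds in reverse, as traced: unload the NVMe kernel modules, strip only the SPDK-tagged iptables rules, and flush the namespace plumbing. Roughly, with the namespace deletion an assumption since _remove_spdk_ns runs with its trace redirected to /dev/null here:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper traced above
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
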
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.642 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:02.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:02.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:02.643 Found net devices under 0000:31:00.0: cvl_0_0 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:02.643 Found net devices under 0000:31:00.1: cvl_0_1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.643 ************************************ 00:18:02.643 START TEST nvmf_perf_adq 00:18:02.643 ************************************ 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:02.643 * Looking for test storage... 00:18:02.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.643 14:01:41 
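
Each suite runs under run_test from common/autotest_common.sh, which the trace shows only as its arity check ('[' 3 -le 1 ']'), banner rows, and a timing summary; a hypothetical reduction of the visible behavior, not the helper's actual body:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"            # e.g. perf_adq.sh --transport=tcp
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

This matches the real 0m10.520s / user 0m4.356s / sys 0m4.540s summary printed when nvmf_wait_for_buf finished above.
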
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:02.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.643 --rc genhtml_branch_coverage=1 00:18:02.643 --rc genhtml_function_coverage=1 00:18:02.643 --rc genhtml_legend=1 00:18:02.643 --rc geninfo_all_blocks=1 00:18:02.643 --rc geninfo_unexecuted_blocks=1 00:18:02.643 00:18:02.643 ' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:02.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.643 --rc genhtml_branch_coverage=1 00:18:02.643 --rc genhtml_function_coverage=1 00:18:02.643 --rc genhtml_legend=1 00:18:02.643 --rc geninfo_all_blocks=1 00:18:02.643 --rc geninfo_unexecuted_blocks=1 00:18:02.643 00:18:02.643 ' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:02.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.643 --rc genhtml_branch_coverage=1 00:18:02.643 --rc genhtml_function_coverage=1 00:18:02.643 --rc genhtml_legend=1 00:18:02.643 --rc geninfo_all_blocks=1 00:18:02.643 --rc geninfo_unexecuted_blocks=1 00:18:02.643 00:18:02.643 ' 00:18:02.643 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:02.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.644 --rc genhtml_branch_coverage=1 00:18:02.644 --rc genhtml_function_coverage=1 00:18:02.644 --rc genhtml_legend=1 00:18:02.644 --rc geninfo_all_blocks=1 00:18:02.644 --rc geninfo_unexecuted_blocks=1 00:18:02.644 00:18:02.644 ' 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:02.644 14:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.644 14:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.920 14:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:07.920 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:07.920 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:07.920 Found net devices under 0000:31:00.0: cvl_0_0 00:18:07.920 14:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:07.920 Found net devices under 0000:31:00.1: cvl_0_1 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:18:07.920 14:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:18:09.297 14:01:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:18:12.585 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.866 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:17.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:17.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:17.867 Found net devices under 0000:31:00.0: cvl_0_0 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:17.867 Found net devices under 0000:31:00.1: cvl_0_1 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:17.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:18:17.867 00:18:17.867 --- 10.0.0.2 ping statistics --- 00:18:17.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.867 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
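For reference, the nvmf_tcp_init sequence traced above reduces to a short, reproducible recipe. A minimal sketch, assuming the names read off this trace (interfaces cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, the 10.0.0.0/24 addressing, and listener port 4420) rather than the harness's own helpers; run as root:

# Point-to-point NVMe/TCP test bed: move the target-side port into its own
# network namespace so initiator-to-target traffic traverses the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP (TCP/4420) on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks in both directions, exactly as the harness pings here.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1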
00:18:17.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:18:17.867 00:18:17.867 --- 10.0.0.1 ping statistics --- 00:18:17.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.867 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=904713 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 904713 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 904713 ']' 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:17.867 14:01:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:17.867 [2024-11-06 14:01:56.758188] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
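The adq_configure_nvmf_target 0 call traced over the next stretch is the baseline (placement-id 0) target bring-up; consolidated, it is the sketch below. In the harness, rpc_cmd is a thin wrapper that forwards these same arguments to the RPC server; the bdev size, NQN, serial, and --sock-priority 0 are the values from this trace, not defaults:

# Launch the target inside the namespace, paused until RPC config completes.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The ADQ-enabled run later in this log repeats the same sequence with --enable-placement-id 1.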
00:18:17.867 [2024-11-06 14:01:56.758238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.868 [2024-11-06 14:01:56.842736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.868 [2024-11-06 14:01:56.881121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.868 [2024-11-06 14:01:56.881153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.868 [2024-11-06 14:01:56.881162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.868 [2024-11-06 14:01:56.881168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.868 [2024-11-06 14:01:56.881174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.868 [2024-11-06 14:01:56.882684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.868 [2024-11-06 14:01:56.882837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.868 [2024-11-06 14:01:56.882952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.868 [2024-11-06 14:01:56.882953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.438 
14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.438 [2024-11-06 14:01:57.708730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.438 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.698 Malloc1 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:18.698 [2024-11-06 14:01:57.771237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=904778 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:18:18.698 14:01:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:18:20.603 "tick_rate": 2400000000, 00:18:20.603 "poll_groups": [ 00:18:20.603 { 00:18:20.603 "name": "nvmf_tgt_poll_group_000", 00:18:20.603 "admin_qpairs": 1, 00:18:20.603 "io_qpairs": 1, 00:18:20.603 "current_admin_qpairs": 1, 00:18:20.603 "current_io_qpairs": 1, 00:18:20.603 "pending_bdev_io": 0, 00:18:20.603 "completed_nvme_io": 25400, 00:18:20.603 "transports": [ 00:18:20.603 { 00:18:20.603 "trtype": "TCP" 00:18:20.603 } 00:18:20.603 ] 00:18:20.603 }, 00:18:20.603 { 00:18:20.603 "name": "nvmf_tgt_poll_group_001", 00:18:20.603 "admin_qpairs": 0, 00:18:20.603 "io_qpairs": 1, 00:18:20.603 "current_admin_qpairs": 0, 00:18:20.603 "current_io_qpairs": 1, 00:18:20.603 "pending_bdev_io": 0, 00:18:20.603 "completed_nvme_io": 26867, 00:18:20.603 "transports": [ 00:18:20.603 { 00:18:20.603 "trtype": "TCP" 00:18:20.603 } 00:18:20.603 ] 00:18:20.603 }, 00:18:20.603 { 00:18:20.603 "name": "nvmf_tgt_poll_group_002", 00:18:20.603 "admin_qpairs": 0, 00:18:20.603 "io_qpairs": 1, 00:18:20.603 "current_admin_qpairs": 0, 00:18:20.603 "current_io_qpairs": 1, 00:18:20.603 "pending_bdev_io": 0, 00:18:20.603 "completed_nvme_io": 27781, 00:18:20.603 "transports": [ 00:18:20.603 { 00:18:20.603 "trtype": "TCP" 00:18:20.603 } 00:18:20.603 ] 00:18:20.603 }, 00:18:20.603 { 00:18:20.603 "name": "nvmf_tgt_poll_group_003", 00:18:20.603 "admin_qpairs": 0, 00:18:20.603 "io_qpairs": 1, 00:18:20.603 "current_admin_qpairs": 0, 00:18:20.603 "current_io_qpairs": 1, 00:18:20.603 "pending_bdev_io": 0, 00:18:20.603 "completed_nvme_io": 22467, 00:18:20.603 "transports": [ 00:18:20.603 { 00:18:20.603 "trtype": "TCP" 00:18:20.603 } 00:18:20.603 ] 00:18:20.603 } 00:18:20.603 ] 00:18:20.603 }' 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:18:20.603 14:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 904778 00:18:28.728 Initializing NVMe Controllers 00:18:28.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:28.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:28.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:28.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:28.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:18:28.728 Initialization complete. Launching workers. 00:18:28.728 ======================================================== 00:18:28.728 Latency(us) 00:18:28.728 Device Information : IOPS MiB/s Average min max 00:18:28.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13870.60 54.18 4615.01 1060.63 8942.32 00:18:28.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14215.50 55.53 4502.71 1127.57 8708.67 00:18:28.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14251.80 55.67 4490.74 1316.54 6937.98 00:18:28.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13789.50 53.87 4640.91 1200.41 9994.64 00:18:28.728 ======================================================== 00:18:28.728 Total : 56127.40 219.25 4561.37 1060.63 9994.64 00:18:28.728 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.728 rmmod nvme_tcp 00:18:28.728 rmmod nvme_fabrics 00:18:28.728 rmmod nvme_keyring 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 904713 ']' 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 904713 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 904713 ']' 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 904713 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.728 14:02:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 904713 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 904713' 00:18:28.988 killing process with pid 904713 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 904713 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 904713 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.988 14:02:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.522 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:31.522 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:18:31.522 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:18:31.522 14:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:18:32.458 14:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:18:34.361 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:39.633 14:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:39.633 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.633 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:39.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:39.634 Found net devices under 0000:31:00.0: cvl_0_0 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:39.634 14:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:39.634 Found net devices under 0000:31:00.1: cvl_0_1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:39.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:18:39.634 00:18:39.634 --- 10.0.0.2 ping statistics --- 00:18:39.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.634 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:18:39.634 00:18:39.634 --- 10.0.0.1 ping statistics --- 00:18:39.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.634 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:39.634 net.core.busy_poll = 1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:18:39.634 net.core.busy_read = 1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=909879 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 909879 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 909879 ']' 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:39.634 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.893 [2024-11-06 14:02:18.916606] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:18:39.893 [2024-11-06 14:02:18.916656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.893 [2024-11-06 14:02:19.006477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.893 [2024-11-06 14:02:19.043165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
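While the freshly launched nvmf_tgt prints its startup notices (they continue below), it is worth collecting what adq_configure_driver just did. Condensed from the trace above into one sequence, followed by the tree's set_xps_rxqs helper; this is a sketch using this run's values (interface cvl_0_0, namespace cvl_0_0_ns_spdk, NVMe/TCP listener 10.0.0.2:4420), not the canonical target/perf_adq.sh:

# enable hardware traffic-class offload on the target-side E810 port
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# let socket reads busy-poll instead of sleeping on interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# split the port into two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ)
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# steer inbound NVMe/TCP flows for 10.0.0.2:4420 into TC1, matched in hardware only (skip_sw)
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The remaining startup notices from the target application continue below.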
00:18:39.893 [2024-11-06 14:02:19.043199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.893 [2024-11-06 14:02:19.043207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.893 [2024-11-06 14:02:19.043215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.893 [2024-11-06 14:02:19.043220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.893 [2024-11-06 14:02:19.044710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.893 [2024-11-06 14:02:19.044824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.893 [2024-11-06 14:02:19.044977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.894 [2024-11-06 14:02:19.044978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.462 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 [2024-11-06 14:02:19.819806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 Malloc1 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 [2024-11-06 14:02:19.867895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=910212 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:18:40.723 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:42.630 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:18:42.630 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.630 14:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:18:42.630 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:42.630 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:18:42.630 "tick_rate": 2400000000,
00:18:42.630 "poll_groups": [
00:18:42.630 {
00:18:42.630 "name": "nvmf_tgt_poll_group_000",
00:18:42.630 "admin_qpairs": 1,
00:18:42.630 "io_qpairs": 0,
00:18:42.630 "current_admin_qpairs": 1,
00:18:42.630 "current_io_qpairs": 0,
00:18:42.630 "pending_bdev_io": 0,
00:18:42.630 "completed_nvme_io": 0,
00:18:42.630 "transports": [
00:18:42.630 {
00:18:42.630 "trtype": "TCP"
00:18:42.630 }
00:18:42.630 ]
00:18:42.630 },
00:18:42.630 {
00:18:42.630 "name": "nvmf_tgt_poll_group_001",
00:18:42.630 "admin_qpairs": 0,
00:18:42.630 "io_qpairs": 4,
00:18:42.630 "current_admin_qpairs": 0,
00:18:42.630 "current_io_qpairs": 4,
00:18:42.630 "pending_bdev_io": 0,
00:18:42.630 "completed_nvme_io": 48784,
00:18:42.630 "transports": [
00:18:42.630 {
00:18:42.630 "trtype": "TCP"
00:18:42.630 }
00:18:42.630 ]
00:18:42.630 },
00:18:42.630 {
00:18:42.630 "name": "nvmf_tgt_poll_group_002",
00:18:42.630 "admin_qpairs": 0,
00:18:42.630 "io_qpairs": 0,
00:18:42.630 "current_admin_qpairs": 0,
00:18:42.630 "current_io_qpairs": 0,
00:18:42.630 "pending_bdev_io": 0,
00:18:42.630 "completed_nvme_io": 0,
00:18:42.630 "transports": [
00:18:42.630 {
00:18:42.630 "trtype": "TCP"
00:18:42.630 }
00:18:42.630 ]
00:18:42.630 },
00:18:42.630 {
00:18:42.630 "name": "nvmf_tgt_poll_group_003",
00:18:42.630 "admin_qpairs": 0,
00:18:42.630 "io_qpairs": 0,
00:18:42.630 "current_admin_qpairs": 0,
00:18:42.630 "current_io_qpairs": 0,
00:18:42.630 "pending_bdev_io": 0,
00:18:42.630 "completed_nvme_io": 0,
00:18:42.630 "transports": [
00:18:42.630 {
00:18:42.630 "trtype": "TCP"
00:18:42.630 }
00:18:42.630 ]
00:18:42.630 }
00:18:42.630 ]
00:18:42.630 }'
00:18:42.630 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:18:42.630 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:18:42.951 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3
00:18:42.951 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]]
00:18:42.951 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 910212
00:18:51.144 Initializing NVMe Controllers
00:18:51.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:51.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:18:51.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:18:51.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:18:51.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:18:51.144 Initialization complete. Launching workers.
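The nvmf_get_stats snapshot above is the ADQ acceptance check: all four io_qpairs (and every completed I/O) landed on nvmf_tgt_poll_group_001 while the other three poll groups stayed idle, meaning the hardware filter steered every connection of the perf run onto a single poll group. The jq/wc pipeline from the trace can be replayed standalone; a sketch assuming the JSON has been saved to a file named nvmf_stats.json (a name chosen here only for illustration):

# count poll groups that currently own zero I/O qpairs
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' nvmf_stats.json | wc -l)
# 3 of 4 groups idle is the expected ADQ outcome; fewer than 2 idle groups
# would mean the connections were spread out and would trip the check at perf_adq.sh@109
[[ $count -lt 2 ]] && echo 'ADQ placement check failed: I/O qpairs not co-located' >&2

The spdk_nvme_perf latency summary for the four initiator cores follows.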
00:18:51.144 ========================================================
00:18:51.144 Latency(us)
00:18:51.144 Device Information : IOPS MiB/s Average min max
00:18:51.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7239.40 28.28 8842.51 1007.72 53692.71
00:18:51.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6836.60 26.71 9362.55 1184.05 54176.42
00:18:51.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5736.40 22.41 11160.39 1249.88 54358.07
00:18:51.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6003.10 23.45 10662.37 958.91 55610.58
00:18:51.144 ========================================================
00:18:51.144 Total : 25815.49 100.84 9918.47 958.91 55610.58
00:18:51.144
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:51.144 rmmod nvme_tcp
00:18:51.144 rmmod nvme_fabrics
00:18:51.144 rmmod nvme_keyring
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 909879 ']'
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 909879
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 909879 ']'
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 909879
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 909879
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 909879'
00:18:51.144 killing process with pid 909879
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 909879
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 909879
00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:51.144 14:02:30
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:51.144 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.145 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.145 14:02:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:18:53.051 00:18:53.051 real 0m50.589s 00:18:53.051 user 2m47.435s 00:18:53.051 sys 0m10.668s 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:53.051 ************************************ 00:18:53.051 END TEST nvmf_perf_adq 00:18:53.051 ************************************ 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.051 ************************************ 00:18:53.051 START TEST nvmf_shutdown 00:18:53.051 ************************************ 00:18:53.051 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:53.311 * Looking for test storage... 
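Before the shutdown suite proceeds, note how the perf_adq teardown above removed its firewall rules. The harness relies on a tagging convention: every iptables rule it inserts (via ipts) carries an -m comment match whose text begins with SPDK_NVMF, so iptr can remove them all with a save/filter/restore round trip and leave unrelated rules untouched. Both halves of the pattern, as they appear verbatim in this run:

# setup: accept NVMe/TCP on the initiator-facing port, tagged for later cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: drop every tagged rule in one pass
iptables-save | grep -v SPDK_NVMF | iptables-restore

The test-storage probe for the shutdown suite resumes below.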
00:18:53.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.311 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:53.311 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:53.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.312 --rc genhtml_branch_coverage=1 00:18:53.312 --rc genhtml_function_coverage=1 00:18:53.312 --rc genhtml_legend=1 00:18:53.312 --rc geninfo_all_blocks=1 00:18:53.312 --rc geninfo_unexecuted_blocks=1 00:18:53.312 00:18:53.312 ' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:53.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.312 --rc genhtml_branch_coverage=1 00:18:53.312 --rc genhtml_function_coverage=1 00:18:53.312 --rc genhtml_legend=1 00:18:53.312 --rc geninfo_all_blocks=1 00:18:53.312 --rc geninfo_unexecuted_blocks=1 00:18:53.312 00:18:53.312 ' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:53.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.312 --rc genhtml_branch_coverage=1 00:18:53.312 --rc genhtml_function_coverage=1 00:18:53.312 --rc genhtml_legend=1 00:18:53.312 --rc geninfo_all_blocks=1 00:18:53.312 --rc geninfo_unexecuted_blocks=1 00:18:53.312 00:18:53.312 ' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:53.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.312 --rc genhtml_branch_coverage=1 00:18:53.312 --rc genhtml_function_coverage=1 00:18:53.312 --rc genhtml_legend=1 00:18:53.312 --rc geninfo_all_blocks=1 00:18:53.312 --rc geninfo_unexecuted_blocks=1 00:18:53.312 00:18:53.312 ' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
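The lt 1.15 2 probe traced above is scripts/common.sh comparing the installed lcov version against 2 to pick compatible coverage options: both version strings are split on '.', '-' or ':' and compared numerically component by component, with missing components treated as zero. A trimmed reconstruction of that logic (assuming purely numeric components; the real cmp_versions also validates each field):

# succeed if dotted version $1 sorts strictly before $2
version_lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # e.g. 1.15 vs 2: 1 < 2 decides it
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov < 2: use the old --rc lcov_* option spellings'

The platform checks from the sourced nvmf/common.sh continue below.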
00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:53.312 14:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:53.312 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:53.313 ************************************ 00:18:53.313 START TEST nvmf_shutdown_tc1 00:18:53.313 ************************************ 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:53.313 14:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:58.601 14:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:58.601 14:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.601 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:58.601 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:58.602 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:58.602 Found net devices under 0000:31:00.0: cvl_0_0 00:18:58.602 14:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:58.602 Found net devices under 0000:31:00.1: cvl_0_1 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:58.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:18:58.602 00:18:58.602 --- 10.0.0.2 ping statistics --- 00:18:58.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.602 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:18:58.602 00:18:58.602 --- 10.0.0.1 ping statistics --- 00:18:58.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.602 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=916964 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 916964 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 916964 ']' 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
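nvmf_tcp_init has now rebuilt, for the shutdown suite, the same split topology the perf test used: one port of the dual-port E810 is moved into a private network namespace and acts as the target (10.0.0.2), while its sibling port stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic crosses real hardware on a single host. Condensed from the trace, with this run's names:

# target side: private namespace, target IP, loopback up
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: the peer port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# both directions must ping before the target is started inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once the nvmf_tgt process creates its RPC socket, waitforlisten returns and configuration proceeds below.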
00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.602 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:58.602 [2024-11-06 14:02:37.734848] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:18:58.602 [2024-11-06 14:02:37.734897] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.602 [2024-11-06 14:02:37.806757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.603 [2024-11-06 14:02:37.836532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.603 [2024-11-06 14:02:37.836559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.603 [2024-11-06 14:02:37.836565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.603 [2024-11-06 14:02:37.836570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.603 [2024-11-06 14:02:37.836574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.603 [2024-11-06 14:02:37.838047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.603 [2024-11-06 14:02:37.838202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.603 [2024-11-06 14:02:37.838359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:58.603 [2024-11-06 14:02:37.838468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.863 [2024-11-06 14:02:37.942283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:18:58.863 14:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.863 14:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:58.863 Malloc1 00:18:58.863 [2024-11-06 14:02:38.027818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.863 Malloc2 00:18:58.863 Malloc3 00:18:58.863 Malloc4 00:18:59.123 Malloc5 00:18:59.123 Malloc6 00:18:59.123 Malloc7 00:18:59.123 Malloc8 00:18:59.123 Malloc9 00:18:59.123 Malloc10 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=917054 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 917054 /var/tmp/bdevperf.sock 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 917054 ']' 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
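The waitforlisten call traced above blocks until the background bdev_svc app (pid 917054, launched just below) is both still alive and listening on its RPC UNIX socket at /var/tmp/bdevperf.sock. A minimal sketch of that polling loop, assuming only the behavior visible in this trace (the function name and retry budget are illustrative, not the harness's exact code):

# Illustrative wait loop: succeed once the pid is alive AND the RPC socket exists.
wait_for_rpc_sock() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # app exited before listening
        [[ -S $sock ]] && return 0              # RPC UNIX socket is up
        sleep 0.1
    done
    return 1                                    # timed out waiting for the listener
}
# mirrors the trace above: wait_for_rpc_sock 917054 /var/tmp/bdevperf.sock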
00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:59.124 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.384 { 00:18:59.384 "params": { 00:18:59.384 "name": "Nvme$subsystem", 00:18:59.384 "trtype": "$TEST_TRANSPORT", 00:18:59.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.384 "adrfam": "ipv4", 00:18:59.384 "trsvcid": "$NVMF_PORT", 00:18:59.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.384 "hdgst": ${hdgst:-false}, 00:18:59.384 "ddgst": ${ddgst:-false} 00:18:59.384 }, 00:18:59.384 "method": "bdev_nvme_attach_controller" 00:18:59.384 } 00:18:59.384 EOF 00:18:59.384 )") 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.384 { 00:18:59.384 "params": { 00:18:59.384 "name": "Nvme$subsystem", 00:18:59.384 "trtype": "$TEST_TRANSPORT", 00:18:59.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.384 "adrfam": "ipv4", 00:18:59.384 "trsvcid": "$NVMF_PORT", 00:18:59.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.384 "hdgst": ${hdgst:-false}, 00:18:59.384 "ddgst": ${ddgst:-false} 00:18:59.384 }, 00:18:59.384 "method": "bdev_nvme_attach_controller" 00:18:59.384 } 00:18:59.384 EOF 00:18:59.384 )") 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.384 { 00:18:59.384 "params": { 00:18:59.384 "name": "Nvme$subsystem", 00:18:59.384 "trtype": "$TEST_TRANSPORT", 00:18:59.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.384 "adrfam": "ipv4", 00:18:59.384 "trsvcid": "$NVMF_PORT", 00:18:59.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.384 "hdgst": ${hdgst:-false}, 00:18:59.384 "ddgst": ${ddgst:-false} 00:18:59.384 }, 00:18:59.384 "method": "bdev_nvme_attach_controller" 
00:18:59.384 } 00:18:59.384 EOF 00:18:59.384 )") 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.384 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.384 { 00:18:59.384 "params": { 00:18:59.384 "name": "Nvme$subsystem", 00:18:59.384 "trtype": "$TEST_TRANSPORT", 00:18:59.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.384 "adrfam": "ipv4", 00:18:59.384 "trsvcid": "$NVMF_PORT", 00:18:59.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.384 "hdgst": ${hdgst:-false}, 00:18:59.384 "ddgst": ${ddgst:-false} 00:18:59.384 }, 00:18:59.384 "method": "bdev_nvme_attach_controller" 00:18:59.384 } 00:18:59.384 EOF 00:18:59.384 )") 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.385 { 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme$subsystem", 00:18:59.385 "trtype": "$TEST_TRANSPORT", 00:18:59.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "$NVMF_PORT", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.385 "hdgst": ${hdgst:-false}, 00:18:59.385 "ddgst": ${ddgst:-false} 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 } 00:18:59.385 EOF 00:18:59.385 )") 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.385 { 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme$subsystem", 00:18:59.385 "trtype": "$TEST_TRANSPORT", 00:18:59.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "$NVMF_PORT", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.385 "hdgst": ${hdgst:-false}, 00:18:59.385 "ddgst": ${ddgst:-false} 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 } 00:18:59.385 EOF 00:18:59.385 )") 00:18:59.385 [2024-11-06 14:02:38.437930] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
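The config fragments traced above come from gen_nvmf_target_json: one heredoc-produced JSON object per subsystem is appended to the config array, the array is later joined with IFS=, and the result is validated and pretty-printed through jq. A condensed sketch of that pattern under those assumptions (the helper name is hypothetical, and the real function wraps the fragments in a fuller config document than the bare array used here):

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "$@"; do
        # One JSON fragment per subsystem; unquoted heredoc so $subsystem expands.
        config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "${TEST_TRANSPORT:-tcp}",
 "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}", "adrfam": "ipv4",
 "trsvcid": "${NVMF_PORT:-4420}",
 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
 "hdgst": false, "ddgst": false},
 "method": "bdev_nvme_attach_controller"}
EOF
        )")
    done
    local IFS=,          # "${config[*]}" joins the fragments with commas
    printf '[%s]\n' "${config[*]}" | jq .
}

Invoked as gen_target_json_sketch 1 2 3, it emits one bdev_nvme_attach_controller entry per subsystem, the same shape as the expanded printf output further down in this trace.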
00:18:59.385 [2024-11-06 14:02:38.437983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.385 { 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme$subsystem", 00:18:59.385 "trtype": "$TEST_TRANSPORT", 00:18:59.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "$NVMF_PORT", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.385 "hdgst": ${hdgst:-false}, 00:18:59.385 "ddgst": ${ddgst:-false} 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 } 00:18:59.385 EOF 00:18:59.385 )") 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.385 { 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme$subsystem", 00:18:59.385 "trtype": "$TEST_TRANSPORT", 00:18:59.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "$NVMF_PORT", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.385 "hdgst": ${hdgst:-false}, 00:18:59.385 "ddgst": ${ddgst:-false} 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 } 00:18:59.385 EOF 00:18:59.385 )") 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.385 { 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme$subsystem", 00:18:59.385 "trtype": "$TEST_TRANSPORT", 00:18:59.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "$NVMF_PORT", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.385 "hdgst": ${hdgst:-false}, 00:18:59.385 "ddgst": ${ddgst:-false} 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 } 00:18:59.385 EOF 00:18:59.385 )") 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:59.385 { 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme$subsystem", 00:18:59.385 
"trtype": "$TEST_TRANSPORT", 00:18:59.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "$NVMF_PORT", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.385 "hdgst": ${hdgst:-false}, 00:18:59.385 "ddgst": ${ddgst:-false} 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 } 00:18:59.385 EOF 00:18:59.385 )") 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:18:59.385 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme1", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme2", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme3", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme4", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme5", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme6", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 
00:18:59.385 "name": "Nvme7", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.385 "name": "Nvme8", 00:18:59.385 "trtype": "tcp", 00:18:59.385 "traddr": "10.0.0.2", 00:18:59.385 "adrfam": "ipv4", 00:18:59.385 "trsvcid": "4420", 00:18:59.385 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:59.385 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:59.385 "hdgst": false, 00:18:59.385 "ddgst": false 00:18:59.385 }, 00:18:59.385 "method": "bdev_nvme_attach_controller" 00:18:59.385 },{ 00:18:59.385 "params": { 00:18:59.386 "name": "Nvme9", 00:18:59.386 "trtype": "tcp", 00:18:59.386 "traddr": "10.0.0.2", 00:18:59.386 "adrfam": "ipv4", 00:18:59.386 "trsvcid": "4420", 00:18:59.386 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:59.386 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:59.386 "hdgst": false, 00:18:59.386 "ddgst": false 00:18:59.386 }, 00:18:59.386 "method": "bdev_nvme_attach_controller" 00:18:59.386 },{ 00:18:59.386 "params": { 00:18:59.386 "name": "Nvme10", 00:18:59.386 "trtype": "tcp", 00:18:59.386 "traddr": "10.0.0.2", 00:18:59.386 "adrfam": "ipv4", 00:18:59.386 "trsvcid": "4420", 00:18:59.386 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:59.386 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:59.386 "hdgst": false, 00:18:59.386 "ddgst": false 00:18:59.386 }, 00:18:59.386 "method": "bdev_nvme_attach_controller" 00:18:59.386 }' 00:18:59.386 [2024-11-06 14:02:38.518036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.386 [2024-11-06 14:02:38.554478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 917054 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:19:00.768 14:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:19:01.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 917054 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 916964 00:19:01.708 14:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.708 { 00:19:01.708 "params": { 00:19:01.708 "name": "Nvme$subsystem", 00:19:01.708 "trtype": "$TEST_TRANSPORT", 00:19:01.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.708 "adrfam": "ipv4", 00:19:01.708 "trsvcid": "$NVMF_PORT", 00:19:01.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.708 "hdgst": ${hdgst:-false}, 00:19:01.708 "ddgst": ${ddgst:-false} 00:19:01.708 }, 00:19:01.708 "method": "bdev_nvme_attach_controller" 00:19:01.708 } 00:19:01.708 EOF 00:19:01.708 )") 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.708 { 00:19:01.708 "params": { 00:19:01.708 "name": "Nvme$subsystem", 00:19:01.708 "trtype": "$TEST_TRANSPORT", 00:19:01.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.708 "adrfam": "ipv4", 00:19:01.708 "trsvcid": "$NVMF_PORT", 00:19:01.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.708 "hdgst": ${hdgst:-false}, 00:19:01.708 "ddgst": ${ddgst:-false} 00:19:01.708 }, 00:19:01.708 "method": "bdev_nvme_attach_controller" 00:19:01.708 } 00:19:01.708 EOF 00:19:01.708 )") 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.708 { 00:19:01.708 "params": { 00:19:01.708 "name": "Nvme$subsystem", 00:19:01.708 "trtype": "$TEST_TRANSPORT", 00:19:01.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.708 "adrfam": "ipv4", 00:19:01.708 "trsvcid": "$NVMF_PORT", 00:19:01.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.708 "hdgst": ${hdgst:-false}, 00:19:01.708 "ddgst": ${ddgst:-false} 00:19:01.708 }, 00:19:01.708 "method": "bdev_nvme_attach_controller" 00:19:01.708 } 00:19:01.708 EOF 00:19:01.708 )") 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.708 { 00:19:01.708 "params": { 00:19:01.708 "name": "Nvme$subsystem", 00:19:01.708 "trtype": "$TEST_TRANSPORT", 00:19:01.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.708 "adrfam": "ipv4", 00:19:01.708 "trsvcid": "$NVMF_PORT", 00:19:01.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.708 "hdgst": ${hdgst:-false}, 00:19:01.708 "ddgst": ${ddgst:-false} 00:19:01.708 }, 00:19:01.708 "method": "bdev_nvme_attach_controller" 00:19:01.708 } 00:19:01.708 EOF 00:19:01.708 )") 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.708 { 00:19:01.708 "params": { 00:19:01.708 "name": "Nvme$subsystem", 00:19:01.708 "trtype": "$TEST_TRANSPORT", 00:19:01.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.708 "adrfam": "ipv4", 00:19:01.708 "trsvcid": "$NVMF_PORT", 00:19:01.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.708 "hdgst": ${hdgst:-false}, 00:19:01.708 "ddgst": ${ddgst:-false} 00:19:01.708 }, 00:19:01.708 "method": "bdev_nvme_attach_controller" 00:19:01.708 } 00:19:01.708 EOF 00:19:01.708 )") 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.708 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.708 { 00:19:01.708 "params": { 00:19:01.709 "name": "Nvme$subsystem", 00:19:01.709 "trtype": "$TEST_TRANSPORT", 00:19:01.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "$NVMF_PORT", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.709 "hdgst": ${hdgst:-false}, 00:19:01.709 "ddgst": ${ddgst:-false} 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 } 00:19:01.709 EOF 00:19:01.709 )") 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.709 { 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme$subsystem", 00:19:01.709 "trtype": "$TEST_TRANSPORT", 00:19:01.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "$NVMF_PORT", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.709 "hdgst": ${hdgst:-false}, 00:19:01.709 "ddgst": ${ddgst:-false} 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 } 00:19:01.709 EOF 00:19:01.709 )") 00:19:01.709 [2024-11-06 14:02:40.921393] Starting SPDK 
v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:19:01.709 [2024-11-06 14:02:40.921449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917738 ] 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.709 { 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme$subsystem", 00:19:01.709 "trtype": "$TEST_TRANSPORT", 00:19:01.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "$NVMF_PORT", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.709 "hdgst": ${hdgst:-false}, 00:19:01.709 "ddgst": ${ddgst:-false} 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 } 00:19:01.709 EOF 00:19:01.709 )") 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.709 { 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme$subsystem", 00:19:01.709 "trtype": "$TEST_TRANSPORT", 00:19:01.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "$NVMF_PORT", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.709 "hdgst": ${hdgst:-false}, 00:19:01.709 "ddgst": ${ddgst:-false} 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 } 00:19:01.709 EOF 00:19:01.709 )") 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:01.709 { 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme$subsystem", 00:19:01.709 "trtype": "$TEST_TRANSPORT", 00:19:01.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "$NVMF_PORT", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.709 "hdgst": ${hdgst:-false}, 00:19:01.709 "ddgst": ${ddgst:-false} 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 } 00:19:01.709 EOF 00:19:01.709 )") 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
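The --json /dev/fd/62 argument in the bdevperf launch above (like the /dev/fd/63 path in the earlier bdev_svc kill message, which shows --json <(gen_nvmf_target_json ...)) is the read end of a bash process substitution: the generated config is streamed straight into the app with no temporary file. A hedged sketch of that launch, reusing the illustrative generator sketched earlier and the flags recorded in this trace:

# bdevperf reads the config from the pipe behind <(...); no temp file needed.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_target_json_sketch {1..10}) \
    -q 64 -o 65536 -w verify -t 1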
00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:01.709 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme1", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme2", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme3", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme4", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme5", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme6", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme7", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme8", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme9", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.709 "method": "bdev_nvme_attach_controller" 00:19:01.709 },{ 00:19:01.709 "params": { 00:19:01.709 "name": "Nvme10", 00:19:01.709 "trtype": "tcp", 00:19:01.709 "traddr": "10.0.0.2", 00:19:01.709 "adrfam": "ipv4", 00:19:01.709 "trsvcid": "4420", 00:19:01.709 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:01.709 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:01.709 "hdgst": false, 00:19:01.709 "ddgst": false 00:19:01.709 }, 00:19:01.710 "method": "bdev_nvme_attach_controller" 00:19:01.710 }' 00:19:01.970 [2024-11-06 14:02:41.001767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.970 [2024-11-06 14:02:41.038349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.350 Running I/O for 1 seconds... 00:19:04.292 2187.00 IOPS, 136.69 MiB/s 00:19:04.292 Latency(us) 00:19:04.292 [2024-11-06T13:02:43.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.292 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme1n1 : 1.07 246.87 15.43 0.00 0.00 252909.31 5079.04 253405.87 00:19:04.292 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme2n1 : 1.15 281.73 17.61 0.00 0.00 220279.05 3358.72 223696.21 00:19:04.292 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme3n1 : 1.15 277.85 17.37 0.00 0.00 220268.71 16711.68 225443.84 00:19:04.292 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme4n1 : 1.15 277.41 17.34 0.00 0.00 216892.25 24029.87 263891.63 00:19:04.292 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme5n1 : 1.14 283.11 17.69 0.00 0.00 204376.16 12397.23 255153.49 00:19:04.292 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme6n1 : 1.17 272.93 17.06 0.00 0.00 213323.09 16274.77 239424.85 00:19:04.292 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme7n1 : 1.16 275.23 17.20 0.00 0.00 207490.13 12834.13 232434.35 00:19:04.292 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme8n1 : 1.17 328.94 20.56 0.00 0.00 170485.48 12178.77 253405.87 00:19:04.292 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme9n1 : 1.17 277.28 17.33 0.00 0.00 198206.49 2157.23 242920.11 00:19:04.292 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:19:04.292 Verification LBA range: start 0x0 length 0x400 00:19:04.292 Nvme10n1 : 1.18 275.79 17.24 0.00 0.00 195921.63 1597.44 269134.51 00:19:04.292 [2024-11-06T13:02:43.576Z] =================================================================================================================== 00:19:04.292 [2024-11-06T13:02:43.576Z] Total : 2797.14 174.82 0.00 0.00 208441.39 1597.44 269134.51 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.551 rmmod nvme_tcp 00:19:04.551 rmmod nvme_fabrics 00:19:04.551 rmmod nvme_keyring 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:19:04.551 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 916964 ']' 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 916964 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 916964 ']' 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 916964 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 916964 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 916964' 00:19:04.552 killing process with pid 916964 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 916964 00:19:04.552 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 916964 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.811 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.349 00:19:07.349 real 0m13.546s 00:19:07.349 user 0m29.586s 00:19:07.349 sys 0m4.886s 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:07.349 ************************************ 00:19:07.349 END TEST nvmf_shutdown_tc1 00:19:07.349 ************************************ 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:07.349 ************************************ 00:19:07.349 START TEST nvmf_shutdown_tc2 00:19:07.349 ************************************ 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:19:07.349 14:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:07.349 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:07.350 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:07.350 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:07.350 Found net devices under 0000:31:00.0: cvl_0_0 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.350 14:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:07.350 Found net devices under 0000:31:00.1: cvl_0_1 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.350 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:07.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:19:07.351 00:19:07.351 --- 10.0.0.2 ping statistics --- 00:19:07.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.351 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:19:07.351 00:19:07.351 --- 10.0.0.1 ping statistics --- 00:19:07.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.351 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.351 14:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=918864 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 918864 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 918864 ']' 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:07.351 14:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:07.351 [2024-11-06 14:02:46.375619] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:19:07.351 [2024-11-06 14:02:46.375670] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.351 [2024-11-06 14:02:46.449057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.351 [2024-11-06 14:02:46.480976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.351 [2024-11-06 14:02:46.481004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.351 [2024-11-06 14:02:46.481010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.351 [2024-11-06 14:02:46.481015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.351 [2024-11-06 14:02:46.481018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
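At this point the harness is parked in waitforlisten: nvmf_tgt (nvmfpid=918864) has been launched inside the test namespace with -m 0x1E, and nothing else may run until the app answers on /var/tmp/spdk.sock. A minimal sketch of that gate, assuming scripts/rpc.py with its -t (timeout) and -s (socket) flags and rpc_get_methods as a cheap liveness probe; the real helper in autotest_common.sh carries extra bookkeeping around the same loop:

# Poll an SPDK app's UNIX-domain RPC socket until it accepts requests.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # app died before listening
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # RPC server is answering
        fi
        sleep 0.5
    done
    return 1                                       # never came up
}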
00:19:07.351 [2024-11-06 14:02:46.482351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.351 [2024-11-06 14:02:46.482546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.351 [2024-11-06 14:02:46.482716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.351 [2024-11-06 14:02:46.482716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:07.920 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.920 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 [2024-11-06 14:02:47.180874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:19:07.921 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:08.180 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.181 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:08.181 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:19:08.181 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:08.181 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.181 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:08.181 Malloc1 00:19:08.181 [2024-11-06 14:02:47.266914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.181 Malloc2 00:19:08.181 Malloc3 00:19:08.181 Malloc4 00:19:08.181 Malloc5 00:19:08.181 Malloc6 00:19:08.441 Malloc7 00:19:08.441 Malloc8 00:19:08.441 Malloc9 00:19:08.441 Malloc10 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=919245 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 919245 /var/tmp/bdevperf.sock 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 919245 ']' 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.441 14:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 "name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 "name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 
"name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 "name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 "name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 "name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:19:08.441 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.441 { 00:19:08.441 "params": { 00:19:08.441 "name": "Nvme$subsystem", 00:19:08.441 "trtype": "$TEST_TRANSPORT", 00:19:08.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.441 "adrfam": "ipv4", 00:19:08.441 "trsvcid": "$NVMF_PORT", 00:19:08.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.441 "hdgst": ${hdgst:-false}, 00:19:08.441 "ddgst": ${ddgst:-false} 00:19:08.441 }, 00:19:08.441 "method": "bdev_nvme_attach_controller" 00:19:08.441 } 00:19:08.441 EOF 00:19:08.441 )") 00:19:08.442 [2024-11-06 14:02:47.677486] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:19:08.442 [2024-11-06 14:02:47.677538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919245 ] 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.442 { 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme$subsystem", 00:19:08.442 "trtype": "$TEST_TRANSPORT", 00:19:08.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "$NVMF_PORT", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.442 "hdgst": ${hdgst:-false}, 00:19:08.442 "ddgst": ${ddgst:-false} 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 } 00:19:08.442 EOF 00:19:08.442 )") 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.442 { 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme$subsystem", 00:19:08.442 "trtype": "$TEST_TRANSPORT", 00:19:08.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "$NVMF_PORT", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.442 "hdgst": ${hdgst:-false}, 00:19:08.442 "ddgst": ${ddgst:-false} 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 } 00:19:08.442 EOF 00:19:08.442 )") 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:08.442 { 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme$subsystem", 00:19:08.442 "trtype": "$TEST_TRANSPORT", 00:19:08.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.442 "adrfam": 
"ipv4", 00:19:08.442 "trsvcid": "$NVMF_PORT", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.442 "hdgst": ${hdgst:-false}, 00:19:08.442 "ddgst": ${ddgst:-false} 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 } 00:19:08.442 EOF 00:19:08.442 )") 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:19:08.442 14:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme1", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme2", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme3", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme4", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme5", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme6", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme7", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 
"adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme8", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme9", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 },{ 00:19:08.442 "params": { 00:19:08.442 "name": "Nvme10", 00:19:08.442 "trtype": "tcp", 00:19:08.442 "traddr": "10.0.0.2", 00:19:08.442 "adrfam": "ipv4", 00:19:08.442 "trsvcid": "4420", 00:19:08.442 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:08.442 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:08.442 "hdgst": false, 00:19:08.442 "ddgst": false 00:19:08.442 }, 00:19:08.442 "method": "bdev_nvme_attach_controller" 00:19:08.442 }' 00:19:08.702 [2024-11-06 14:02:47.743589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.702 [2024-11-06 14:02:47.774029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.083 Running I/O for 10 seconds... 
00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:19:10.343 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:19:10.344 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.602 14:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=138 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 138 -ge 100 ']' 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 919245 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 919245 ']' 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 919245 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:10.602 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 919245 00:19:10.861 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:10.861 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:10.861 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 919245' 00:19:10.861 killing process with pid 919245 00:19:10.861 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 919245 00:19:10.861 14:02:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 919245
00:19:10.861 Received shutdown signal, test time was about 0.677479 seconds
00:19:10.861
00:19:10.861 Latency(us)
00:19:10.861 [2024-11-06T13:02:50.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:10.861 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme1n1 : 0.68 388.55 24.28 0.00 0.00 162569.51 1556.48 173015.04
00:19:10.861 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme2n1 : 0.65 302.55 18.91 0.00 0.00 203150.74 1815.89 172141.23
00:19:10.861 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme3n1 : 0.65 294.68 18.42 0.00 0.00 206027.09 18786.99 183500.80
00:19:10.861 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme4n1 : 0.67 383.69 23.98 0.00 0.00 154819.63 15400.96 158160.21
00:19:10.861 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme5n1 : 0.67 380.10 23.76 0.00 0.00 153311.36 15073.28 189617.49
00:19:10.861 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme6n1 : 0.66 291.61 18.23 0.00 0.00 195192.04 14964.05 175636.48
00:19:10.861 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme7n1 : 0.67 381.32 23.83 0.00 0.00 146223.04 13926.40 158160.21
00:19:10.861 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme8n1 : 0.66 385.44 24.09 0.00 0.00 141324.48 12342.61 173015.04
00:19:10.861 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme9n1 : 0.67 287.44 17.97 0.00 0.00 184984.75 17039.36 185248.43
00:19:10.861 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:10.861 Verification LBA range: start 0x0 length 0x400
00:19:10.861 Nvme10n1 : 0.66 290.04 18.13 0.00 0.00 179003.16 15947.09 203598.51
00:19:10.861 [2024-11-06T13:02:50.145Z] ===================================================================================================================
00:19:10.861 [2024-11-06T13:02:50.145Z] Total : 3385.43 211.59 0.00 0.00 169726.18 1556.48 203598.51
00:19:10.861 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:19:11.798 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 918864 00:19:11.798 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:19:11.798 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.799 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.799 rmmod nvme_tcp 00:19:11.799 rmmod nvme_fabrics 00:19:12.058 rmmod nvme_keyring 00:19:12.059 14:02:51
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 918864 ']' 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 918864 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 918864 ']' 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 918864 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 918864 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 918864' 00:19:12.059 killing process with pid 918864 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 918864 00:19:12.059 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 918864 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.318 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.318 14:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.224 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.224 00:19:14.224 real 0m7.368s 00:19:14.224 user 0m22.002s 00:19:14.224 sys 0m1.001s 00:19:14.224 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:14.224 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:14.224 ************************************ 00:19:14.224 END TEST nvmf_shutdown_tc2 00:19:14.224 ************************************ 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:14.225 ************************************ 00:19:14.225 START TEST nvmf_shutdown_tc3 00:19:14.225 ************************************ 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.225 14:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.225 14:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:14.225 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:14.225 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.225 14:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:14.225 Found net devices under 0000:31:00.0: cvl_0_0 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:14.225 Found net devices under 0000:31:00.1: cvl_0_1 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.225 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.226 14:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.226 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:19:14.485 00:19:14.485 --- 10.0.0.2 ping statistics --- 00:19:14.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.485 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:19:14.485 00:19:14.485 --- 10.0.0.1 ping statistics --- 00:19:14.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.485 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=920697 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 920697 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 920697 ']' 00:19:14.485 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.486 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.486 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
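Before the target app comes up, the harness has built its test topology, collected here into one sketch: one e810 port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a single iptables rule opens the NVMe/TCP port. Every command below appears in the trace above; the interface names and 10.0.0.0/24 addresses are specific to this run:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"               # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
  ping -c 1 10.0.0.2                            # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator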
00:19:14.486 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.486 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:14.745 [2024-11-06 14:02:53.787749] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:19:14.745 [2024-11-06 14:02:53.787798] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.745 [2024-11-06 14:02:53.859276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.745 [2024-11-06 14:02:53.888913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.745 [2024-11-06 14:02:53.888941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.745 [2024-11-06 14:02:53.888946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.745 [2024-11-06 14:02:53.888951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.745 [2024-11-06 14:02:53.888955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.745 [2024-11-06 14:02:53.890199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.745 [2024-11-06 14:02:53.890353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.745 [2024-11-06 14:02:53.890589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.745 [2024-11-06 14:02:53.890590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.314 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:15.315 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.315 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:15.315 [2024-11-06 14:02:54.592229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.315 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.315 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:15.315 14:02:54 
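A quick decode of the reactor notices above: the target was started with -m 0x1E, and 0x1E is binary 11110, i.e. bits 1 through 4 set, which is exactly the four reactors reported on cores 1-4. Core 0 is left free on purpose; bdevperf is launched later with -c 0x1 and takes it. The mask can be checked mechanically:

  mask=0x1E
  printf 'reactor cores:'
  for bit in {0..7}; do
    (( (mask >> bit) & 1 )) && printf ' %d' "$bit"
  done
  echo    # -> reactor cores: 1 2 3 4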
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.575 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:15.575 Malloc1 
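The for i / cat trace above is the create_subsystems step: for each i in 1..10 a block of RPC lines is appended to rpcs.txt and the whole file is replayed through rpc_cmd, which is what produces the Malloc1..Malloc10 lines that follow. A plausible reconstruction of the loop body (the RPC names are SPDK's standard ones; the exact bdev size and flags used by shutdown.sh may differ):

  for i in {1..10}; do
    cat >> rpcs.txt <<EOF
  bdev_malloc_create -b Malloc$i 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  EOF
  done
  rpc_cmd < rpcs.txt   # the harness helper feeds each line to rpc.py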
00:19:15.575 [2024-11-06 14:02:54.684051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.575 Malloc2 00:19:15.575 Malloc3 00:19:15.575 Malloc4 00:19:15.575 Malloc5 00:19:15.575 Malloc6 00:19:15.835 Malloc7 00:19:15.835 Malloc8 00:19:15.835 Malloc9 00:19:15.835 Malloc10 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=921077 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 921077 /var/tmp/bdevperf.sock 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 921077 ']' 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
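How the perf job is wired together: bdevperf gets a private RPC socket plus a JSON config on /dev/fd/63, which is bash process substitution over gen_nvmf_target_json (traced next). Condensed from the command line above, with $rootdir standing in for the Jenkins workspace path:

  # qd 64, 64 KiB I/Os, "verify" workload, 10 seconds; -r gives bdevperf its
  # own RPC socket and --json attaches one NVMe-oF controller per cnode
  bdevperf=$rootdir/build/examples/bdevperf
  "$bdevperf" -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!                                # 921077 in this run
  waitforlisten $perfpid /var/tmp/bdevperf.sock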
00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.835 { 00:19:15.835 "params": { 00:19:15.835 "name": "Nvme$subsystem", 00:19:15.835 "trtype": "$TEST_TRANSPORT", 00:19:15.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.835 "adrfam": "ipv4", 00:19:15.835 "trsvcid": "$NVMF_PORT", 00:19:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.835 "hdgst": ${hdgst:-false}, 00:19:15.835 "ddgst": ${ddgst:-false} 00:19:15.835 }, 00:19:15.835 "method": "bdev_nvme_attach_controller" 00:19:15.835 } 00:19:15.835 EOF 00:19:15.835 )") 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.835 { 00:19:15.835 "params": { 00:19:15.835 "name": "Nvme$subsystem", 00:19:15.835 "trtype": "$TEST_TRANSPORT", 00:19:15.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.835 "adrfam": "ipv4", 00:19:15.835 "trsvcid": "$NVMF_PORT", 00:19:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.835 "hdgst": ${hdgst:-false}, 00:19:15.835 "ddgst": ${ddgst:-false} 00:19:15.835 }, 00:19:15.835 "method": "bdev_nvme_attach_controller" 00:19:15.835 } 00:19:15.835 EOF 00:19:15.835 )") 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.835 { 00:19:15.835 "params": { 00:19:15.835 "name": "Nvme$subsystem", 00:19:15.835 "trtype": "$TEST_TRANSPORT", 00:19:15.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.835 "adrfam": "ipv4", 00:19:15.835 "trsvcid": "$NVMF_PORT", 00:19:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.835 "hdgst": ${hdgst:-false}, 00:19:15.835 "ddgst": ${ddgst:-false} 00:19:15.835 }, 00:19:15.835 "method": "bdev_nvme_attach_controller" 00:19:15.835 } 00:19:15.835 EOF 00:19:15.835 )") 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.835 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.835 { 00:19:15.835 "params": { 00:19:15.835 "name": "Nvme$subsystem", 00:19:15.835 "trtype": "$TEST_TRANSPORT", 00:19:15.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.835 "adrfam": "ipv4", 00:19:15.835 "trsvcid": "$NVMF_PORT", 00:19:15.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.835 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.836 { 00:19:15.836 "params": { 00:19:15.836 "name": "Nvme$subsystem", 00:19:15.836 "trtype": "$TEST_TRANSPORT", 00:19:15.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.836 "adrfam": "ipv4", 00:19:15.836 "trsvcid": "$NVMF_PORT", 00:19:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.836 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.836 { 00:19:15.836 "params": { 00:19:15.836 "name": "Nvme$subsystem", 00:19:15.836 "trtype": "$TEST_TRANSPORT", 00:19:15.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.836 "adrfam": "ipv4", 00:19:15.836 "trsvcid": "$NVMF_PORT", 00:19:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.836 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 [2024-11-06 14:02:55.096705] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:19:15.836 [2024-11-06 14:02:55.096758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921077 ] 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.836 { 00:19:15.836 "params": { 00:19:15.836 "name": "Nvme$subsystem", 00:19:15.836 "trtype": "$TEST_TRANSPORT", 00:19:15.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.836 "adrfam": "ipv4", 00:19:15.836 "trsvcid": "$NVMF_PORT", 00:19:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.836 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.836 { 00:19:15.836 "params": { 00:19:15.836 "name": "Nvme$subsystem", 00:19:15.836 "trtype": "$TEST_TRANSPORT", 00:19:15.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.836 "adrfam": "ipv4", 00:19:15.836 "trsvcid": "$NVMF_PORT", 00:19:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.836 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.836 { 00:19:15.836 "params": { 00:19:15.836 "name": "Nvme$subsystem", 00:19:15.836 "trtype": "$TEST_TRANSPORT", 00:19:15.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.836 "adrfam": "ipv4", 00:19:15.836 "trsvcid": "$NVMF_PORT", 00:19:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.836 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:15.836 { 00:19:15.836 "params": { 00:19:15.836 "name": "Nvme$subsystem", 00:19:15.836 "trtype": "$TEST_TRANSPORT", 00:19:15.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.836 
"adrfam": "ipv4", 00:19:15.836 "trsvcid": "$NVMF_PORT", 00:19:15.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.836 "hdgst": ${hdgst:-false}, 00:19:15.836 "ddgst": ${ddgst:-false} 00:19:15.836 }, 00:19:15.836 "method": "bdev_nvme_attach_controller" 00:19:15.836 } 00:19:15.836 EOF 00:19:15.836 )") 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:15.836 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:19:16.097 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:19:16.097 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme1", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme2", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme3", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme4", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme5", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme6", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme7", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 
00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme8", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.097 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:16.097 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:16.097 "hdgst": false, 00:19:16.097 "ddgst": false 00:19:16.097 }, 00:19:16.097 "method": "bdev_nvme_attach_controller" 00:19:16.097 },{ 00:19:16.097 "params": { 00:19:16.097 "name": "Nvme9", 00:19:16.097 "trtype": "tcp", 00:19:16.097 "traddr": "10.0.0.2", 00:19:16.097 "adrfam": "ipv4", 00:19:16.097 "trsvcid": "4420", 00:19:16.098 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:16.098 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:16.098 "hdgst": false, 00:19:16.098 "ddgst": false 00:19:16.098 }, 00:19:16.098 "method": "bdev_nvme_attach_controller" 00:19:16.098 },{ 00:19:16.098 "params": { 00:19:16.098 "name": "Nvme10", 00:19:16.098 "trtype": "tcp", 00:19:16.098 "traddr": "10.0.0.2", 00:19:16.098 "adrfam": "ipv4", 00:19:16.098 "trsvcid": "4420", 00:19:16.098 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:16.098 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:16.098 "hdgst": false, 00:19:16.098 "ddgst": false 00:19:16.098 }, 00:19:16.098 "method": "bdev_nvme_attach_controller" 00:19:16.098 }' 00:19:16.098 [2024-11-06 14:02:55.162384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.098 [2024-11-06 14:02:55.192738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.477 Running I/O for 10 seconds... 
00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:19:17.736 14:02:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:19:17.994 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 920697 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 920697 ']' 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 920697 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.995 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 920697 00:19:18.270 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:18.270 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:18.270 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 920697' 00:19:18.270 killing process with pid 920697 00:19:18.270 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 920697 00:19:18.270 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 920697 00:19:18.270 [2024-11-06 14:02:57.305075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9020 is same with the state(6) to be set 00:19:18.270 [2024-11-06 14:02:57.305122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9020 is same with the state(6) to be set 00:19:18.270 [2024-11-06 14:02:57.305130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9020 is same with the state(6) to be set 00:19:18.270 [2024-11-06 14:02:57.305907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2ae0 is same with the state(6) to be set 00:19:18.270 [2024-11-06 14:02:57.305936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2ae0 is same with the state(6) to be set 00:19:18.270 [2024-11-06 14:02:57.305943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23d2ae0 is same with the state(6) to be set 00:19:18.270 
[2024-11-06 14:02:57.305948 - 14:02:57.309294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x... is same with the state(6) to be set (message repeated verbatim, several dozen times per qpair, for tqpair=0x23d2ae0, 0x23e94f0, 0x23e99c0 and 0x23e9eb0; the final run of repeats is cut off mid-line at the end of the captured section)
*ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 
14:02:57.309399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9eb0 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.309985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same 
with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.272 [2024-11-06 14:02:57.310195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310321] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the 
state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.310517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea380 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 
14:02:57.311591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.273 [2024-11-06 14:02:57.311615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same 
with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.311695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea700 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eabd0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eabd0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eabd0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eabd0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eabd0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eabd0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312985] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.312999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the 
state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.274 [2024-11-06 14:02:57.313091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb0a0 is same with the state(6) to be set 00:19:18.275 [2024-11-06 14:02:57.313182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
00:19:18.275 [2024-11-06 14:02:57.313466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.275 [2024-11-06 14:02:57.313493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313501-14:02:57.313530) ...]
00:19:18.275 [2024-11-06 14:02:57.313535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe29ea0 is same with the state(6) to be set
00:19:18.275 [2024-11-06 14:02:57.313559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.275 [2024-11-06 14:02:57.313566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313572-14:02:57.313599) ...]
00:19:18.275 [2024-11-06 14:02:57.313604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cb00 is same with the state(6) to be set
00:19:18.275 [2024-11-06 14:02:57.313620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.275 [2024-11-06 14:02:57.313626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313632-14:02:57.313665) ...]
00:19:18.275 [2024-11-06 14:02:57.313670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe393e0 is same with the state(6) to be set
00:19:18.275 [2024-11-06 14:02:57.313687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.275 [2024-11-06 14:02:57.313693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313699-14:02:57.313729) ...]
00:19:18.275 [2024-11-06 14:02:57.313700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set
[... same message for tqpair=0x23eb570 repeated through 14:02:57.314071, interleaved mid-line with the nvme_qpair.c/nvme_tcp.c messages below ...]
00:19:18.275 [2024-11-06 14:02:57.313734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe391e0 is same with the state(6) to be set
00:19:18.275 [2024-11-06 14:02:57.313752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.275 [2024-11-06 14:02:57.313759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313766-14:02:57.313799) ...]
00:19:18.276 [2024-11-06 14:02:57.313805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afd10 is same with the state(6) to be set
00:19:18.276 [2024-11-06 14:02:57.313824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.276 [2024-11-06 14:02:57.313831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313839-14:02:57.313872) ...]
00:19:18.276 [2024-11-06 14:02:57.313879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126abd0 is same with the state(6) to be set
00:19:18.276 [2024-11-06 14:02:57.313896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.276 [2024-11-06 14:02:57.313903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313910-14:02:57.313941) ...]
00:19:18.276 [2024-11-06 14:02:57.313946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a7b0 is same with the state(6) to be set
00:19:18.276 [2024-11-06 14:02:57.313970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.276 [2024-11-06 14:02:57.313979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1, cid:2 and cid:3 (14:02:57.313986-14:02:57.314018) ...]
00:19:18.276 [2024-11-06 14:02:57.314024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0990 is same with the state(6) to be set
00:19:18.276 [2024-11-06 14:02:57.314041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.276 [2024-11-06 14:02:57.314047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1 and cid:2 (14:02:57.314054-14:02:57.314072) ...]
00:19:18.276 [2024-11-06 14:02:57.314078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:18.276 [2024-11-06 14:02:57.314081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be
set 00:19:18.276 [2024-11-06 14:02:57.314084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.276 [2024-11-06 14:02:57.314090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b230 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.276 [2024-11-06 14:02:57.314176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb570 is same with the state(6) to be set 00:19:18.277 [2024-11-06 14:02:57.314502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.277 [2024-11-06 14:02:57.314830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.277 [2024-11-06 14:02:57.314838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:18.278 [2024-11-06 14:02:57.314872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.314979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 
[2024-11-06 14:02:57.314991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.314996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 
14:02:57.315108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 
14:02:57.315225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.278 [2024-11-06 14:02:57.315504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.278 [2024-11-06 14:02:57.315509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 
14:02:57.315555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.279 [2024-11-06 14:02:57.315860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.279 [2024-11-06 14:02:57.315866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.315989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.315994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.316239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.280 [2024-11-06 14:02:57.316249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.280 [2024-11-06 14:02:57.317367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:19:18.280 [2024-11-06 14:02:57.317390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b230 (9): Bad file descriptor 00:19:18.280 [2024-11-06 14:02:57.318940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] 
resetting controller
00:19:18.280 [2024-11-06 14:02:57.318960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2cb00 (9): Bad file descriptor
00:19:18.280-00:19:18.281 [2024-11-06 14:02:57.319072-57.319135] [condensed: 5 repeated command/completion pairs; nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0-4 nsid:1 lba:32768-33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:19:18.281 [2024-11-06 14:02:57.319141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048080 is same with the state(6) to be set
00:19:18.281 [2024-11-06 14:02:57.319648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.281 [2024-11-06 14:02:57.319666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b230 with addr=10.0.0.2, port=4420
00:19:18.281 [2024-11-06 14:02:57.319673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b230 is same with the state(6) to be set
00:19:18.281 [2024-11-06 14:02:57.320569] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:18.281 [2024-11-06 14:02:57.320601] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:18.281 [2024-11-06 14:02:57.320630] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:18.281 [2024-11-06 14:02:57.320658] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:18.281-00:19:18.282 [2024-11-06 14:02:57.320688-57.321460] [condensed: 64 repeated command/completion pairs; nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:32768-40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:19:18.282 [2024-11-06 14:02:57.321466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1247dd0 is same with the state(6) to be set
00:19:18.282 [2024-11-06 14:02:57.321527] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:18.282-00:19:18.284 [2024-11-06 14:02:57.321547-57.322304] [condensed: 64 repeated command/completion pairs; nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:19:18.284 [2024-11-06 14:02:57.322310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a840 is same with the state(6) to be set
00:19:18.284 [2024-11-06 14:02:57.322351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:19:18.284 [2024-11-06 14:02:57.322371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe393e0 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.322738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.284 [2024-11-06 14:02:57.322748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2cb00 with addr=10.0.0.2, port=4420
00:19:18.284 [2024-11-06 14:02:57.322753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cb00 is same with the state(6) to be set
00:19:18.284 [2024-11-06 14:02:57.322760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b230 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:18.284 [2024-11-06 14:02:57.324601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:19:18.284 [2024-11-06 14:02:57.324610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afd10 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0990 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2cb00 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:19:18.284 [2024-11-06 14:02:57.324645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:19:18.284 [2024-11-06 14:02:57.324653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:19:18.284 [2024-11-06 14:02:57.324661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:19:18.284 [2024-11-06 14:02:57.324677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe29ea0 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe391e0 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126abd0 (9): Bad file descriptor
00:19:18.284 [2024-11-06 14:02:57.324715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126a7b0 (9): Bad file descriptor
00:19:18.284-00:19:18.285 [2024-11-06 14:02:57.324738-57.324779] [condensed: 4 repeated admin command/completion pairs; nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:19:18.285 [2024-11-06 14:02:57.324784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aff40 is same with the state(6) to be set
00:19:18.285 [2024-11-06 14:02:57.325210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.285 [2024-11-06 14:02:57.325223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe393e0 with addr=10.0.0.2, port=4420
00:19:18.285 [2024-11-06 14:02:57.325229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe393e0 is same with the state(6) to be set
00:19:18.285 [2024-11-06 14:02:57.325242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:19:18.285 [2024-11-06 14:02:57.325253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:19:18.285 [2024-11-06 14:02:57.325258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:19:18.285 [2024-11-06 14:02:57.325264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:19:18.285 [2024-11-06 14:02:57.325694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.285 [2024-11-06 14:02:57.325703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b0990 with addr=10.0.0.2, port=4420
00:19:18.285 [2024-11-06 14:02:57.325709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0990 is same with the state(6) to be set
00:19:18.285 [2024-11-06 14:02:57.326023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.285 [2024-11-06 14:02:57.326030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afd10 with addr=10.0.0.2, port=4420
00:19:18.285 [2024-11-06 14:02:57.326035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afd10 is same with the state(6) to be set
00:19:18.285 [2024-11-06 14:02:57.326041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe393e0 (9): Bad file descriptor
00:19:18.285 [2024-11-06 14:02:57.326080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0990 (9): Bad file descriptor
00:19:18.285 [2024-11-06 14:02:57.326087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afd10 (9): Bad file descriptor
00:19:18.285 [2024-11-06 14:02:57.326093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:19:18.285 [2024-11-06 14:02:57.326097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:19:18.285 [2024-11-06 14:02:57.326102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:19:18.285 [2024-11-06 14:02:57.326107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:19:18.285 [2024-11-06 14:02:57.326137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:19:18.285 [2024-11-06 14:02:57.326142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:19:18.285 [2024-11-06 14:02:57.326147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:19:18.285 [2024-11-06 14:02:57.326152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:19:18.285 [2024-11-06 14:02:57.326157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:19:18.285 [2024-11-06 14:02:57.326161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:19:18.285 [2024-11-06 14:02:57.326167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:19:18.285 [2024-11-06 14:02:57.326171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:19:18.285 [2024-11-06 14:02:57.329048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:19:18.285 [2024-11-06 14:02:57.329422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.285 [2024-11-06 14:02:57.329436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b230 with addr=10.0.0.2, port=4420
00:19:18.285 [2024-11-06 14:02:57.329442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b230 is same with the state(6) to be set
00:19:18.285 [2024-11-06 14:02:57.329465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b230 (9): Bad file descriptor
00:19:18.285 [2024-11-06 14:02:57.329488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:19:18.285 [2024-11-06 14:02:57.329493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:19:18.285 [2024-11-06 14:02:57.329498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:19:18.285 [2024-11-06 14:02:57.329503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:19:18.285 [2024-11-06 14:02:57.329756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:18.285 [2024-11-06 14:02:57.330121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.285 [2024-11-06 14:02:57.330130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2cb00 with addr=10.0.0.2, port=4420
00:19:18.285 [2024-11-06 14:02:57.330135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cb00 is same with the state(6) to be set
00:19:18.285 [2024-11-06 14:02:57.330158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2cb00 (9): Bad file descriptor
00:19:18.285 [2024-11-06 14:02:57.330180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:19:18.285 [2024-11-06 14:02:57.330186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:19:18.285 [2024-11-06 14:02:57.330191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:19:18.285 [2024-11-06 14:02:57.330195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:19:18.285 [2024-11-06 14:02:57.334637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aff40 (9): Bad file descriptor
00:19:18.285-00:19:18.286 [2024-11-06 14:02:57.334710-57.335178] [condensed: 39 repeated command/completion pairs; nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-38 nsid:1 lba:24576-29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:19:18.286 [2024-11-06 14:02:57.335184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335189] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.286 [2024-11-06 14:02:57.335368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.286 [2024-11-06 14:02:57.335373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.335480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.335486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235720 is same with the state(6) to be set 00:19:18.287 [2024-11-06 14:02:57.336443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.287 [2024-11-06 14:02:57.336811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.287 [2024-11-06 14:02:57.336816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.336981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:18.288 [2024-11-06 14:02:57.336992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.336999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 
14:02:57.337111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.337205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.337211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1242890 is same with the state(6) to be set 00:19:18.288 [2024-11-06 14:02:57.338155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.288 [2024-11-06 14:02:57.338165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.288 [2024-11-06 14:02:57.338174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338327] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.289 [2024-11-06 14:02:57.338699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.289 [2024-11-06 14:02:57.338705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.290 [2024-11-06 14:02:57.338821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.290 [2024-11-06 14:02:57.338828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:18.290 [2024-11-06 14:02:57.338834-338974] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:52-63 nsid:1 lba:31232-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:18.290 [2024-11-06 14:02:57.338982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1243de0 is same with the state(6) to be set
00:19:18.290 [2024-11-06 14:02:57.339925-340744] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 (lba = 24576 + 128 * cid) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:18.292 [2024-11-06 14:02:57.340750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245330 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.341694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.341706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.341714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.341800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.341809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.342173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.342183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe391e0 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.342191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe391e0 is same with the state(6) to be set
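Each outstanding READ above completes with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion; this is the expected completion for I/O still in flight when a submission queue is torn down during a controller reset. Below is a minimal standalone sketch of how that (SCT/SC) pair and the trailing p/m/dnr flags unpack from the upper half of completion dword 3, assuming the layout in the base spec (illustrative C, not SPDK's own spdk_nvme_print_completion()):

/* Illustrative decoder for the "(SCT/SC)" pair printed above, e.g. "(00/08)".
 * Layout follows NVMe completion DW3: bit 16 = Phase, bits 24:17 = Status Code,
 * bits 27:25 = Status Code Type, bit 30 = More, bit 31 = Do Not Retry. */
#include <stdint.h>
#include <stdio.h>

static void decode_status(uint16_t hw) /* upper 16 bits of completion DW3 */
{
    unsigned p   = hw & 0x1;            /* phase tag, printed as p:0 */
    uint8_t  sc  = (hw >> 1) & 0xff;    /* Status Code */
    uint8_t  sct = (hw >> 9) & 0x7;     /* Status Code Type */
    unsigned m   = (hw >> 14) & 0x1;    /* More, printed as m:0 */
    unsigned dnr = (hw >> 15) & 0x1;    /* Do Not Retry, printed as dnr:0 */

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
}

int main(void)
{
    decode_status(0x08 << 1); /* reproduces the (00/08) completions in the log */
    return 0;
}

Run standalone, this prints "(00/08) p:0 m:0 dnr:0 ABORTED - SQ DELETION", matching the completions dumped above.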
00:19:18.292 [2024-11-06 14:02:57.342379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.342387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe29ea0 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.342392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe29ea0 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.342709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.342716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126abd0 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.342721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126abd0 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.343472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.343483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.343489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.343496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:18.292 [2024-11-06 14:02:57.343812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.343821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126a7b0 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.343826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a7b0 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.344020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.344027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe393e0 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.344032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe393e0 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.344040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe391e0 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.344050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe29ea0 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.344057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126abd0 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.344401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.344411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afd10 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.344416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afd10 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.344760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.344769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b0990 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.344774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0990 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.344829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.344836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b230 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.344841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b230 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.345066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.292 [2024-11-06 14:02:57.345074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2cb00 with addr=10.0.0.2, port=4420
00:19:18.292 [2024-11-06 14:02:57.345079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cb00 is same with the state(6) to be set
00:19:18.292 [2024-11-06 14:02:57.345085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126a7b0 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.345092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe393e0 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.345098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:19:18.292 [2024-11-06 14:02:57.345115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:19:18.292 [2024-11-06 14:02:57.345120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:19:18.292 [2024-11-06 14:02:57.345135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:19:18.292 [2024-11-06 14:02:57.345140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:19:18.292 [2024-11-06 14:02:57.345154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:19:18.292 [2024-11-06 14:02:57.345187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afd10 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.345197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0990 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.345204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b230 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.345210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2cb00 (9): Bad file descriptor
00:19:18.292 [2024-11-06 14:02:57.345216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:19:18.292 [2024-11-06 14:02:57.345230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:19:18.292 [2024-11-06 14:02:57.345235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:19:18.292 [2024-11-06 14:02:57.345252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:19:18.292 [2024-11-06 14:02:57.345278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:19:18.292 [2024-11-06 14:02:57.345293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:19:18.292 [2024-11-06 14:02:57.345299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:19:18.292 [2024-11-06 14:02:57.345303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:19:18.292 [2024-11-06 14:02:57.345308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:19:18.293 [2024-11-06 14:02:57.345313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
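The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 while the target side is being torn down. The subsequent "Failed to flush tqpair=... (9): Bad file descriptor" is errno 9, EBADF, once the qpair's socket has already been closed. A minimal probe that reproduces the first message, as a hypothetical standalone sketch with the address and port taken from the log:

/* Attempts a plain TCP connect to the NVMe/TCP listener from the log.
 * With no listener present, connect() fails and errno is 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}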
00:19:18.293 [2024-11-06 14:02:57.345318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:19:18.293 [2024-11-06 14:02:57.345322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:19:18.293 [2024-11-06 14:02:57.345327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:19:18.293 [2024-11-06 14:02:57.345331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:19:18.293 [2024-11-06 14:02:57.345336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:19:18.293 [2024-11-06 14:02:57.345340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:19:18.293 [2024-11-06 14:02:57.345346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:19:18.293 [2024-11-06 14:02:57.345351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:19:18.293 [2024-11-06 14:02:57.345376-346129] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 (lba = 24576 + 128 * cid) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:18.294 [2024-11-06 14:02:57.346135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249350 is same with the state(6) to be set
00:19:18.294 task offset: 32896 on job bdev=Nvme7n1 fails
00:19:18.294
00:19:18.294 Latency(us)
00:19:18.294 [2024-11-06T13:02:57.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:18.294 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:18.294 Job: Nvme1n1 ended in about 0.79 seconds with error
00:19:18.294 Verification LBA range: start 0x0 length 0x400
00:19:18.294 Nvme1n1 : 0.79 242.30 15.14 80.77 0.00 196243.81 3290.45 191365.12
00:19:18.294 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:18.294 Job: Nvme2n1 ended in about 0.79 seconds with error
00:19:18.294 Verification LBA range: start 0x0 length 0x400
00:19:18.294 Nvme2n1 : 0.79 322.15 20.13 6.29 0.00 189543.21 16384.00 166898.35
00:19:18.294 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.294 Job: Nvme3n1 ended in about 0.81 seconds with error 00:19:18.294 Verification LBA range: start 0x0 length 0x400 00:19:18.294 Nvme3n1 : 0.81 236.90 14.81 78.97 0.00 194271.57 12779.52 180005.55 00:19:18.294 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.294 Job: Nvme4n1 ended in about 0.81 seconds with error 00:19:18.294 Verification LBA range: start 0x0 length 0x400 00:19:18.294 Nvme4n1 : 0.81 236.40 14.78 78.80 0.00 191435.52 15947.09 190491.31 00:19:18.294 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.294 Job: Nvme5n1 ended in about 0.81 seconds with error 00:19:18.294 Verification LBA range: start 0x0 length 0x400 00:19:18.295 Nvme5n1 : 0.81 235.89 14.74 78.63 0.00 188592.00 15182.51 181753.17 00:19:18.295 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.295 Job: Nvme6n1 ended in about 0.82 seconds with error 00:19:18.295 Verification LBA range: start 0x0 length 0x400 00:19:18.295 Nvme6n1 : 0.82 235.38 14.71 78.46 0.00 185779.20 14090.24 180879.36 00:19:18.295 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.295 Job: Nvme7n1 ended in about 0.79 seconds with error 00:19:18.295 Verification LBA range: start 0x0 length 0x400 00:19:18.295 Nvme7n1 : 0.79 324.74 20.30 80.87 0.00 140677.42 3454.29 182626.99 00:19:18.295 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.295 Job: Nvme8n1 ended in about 0.80 seconds with error 00:19:18.295 Verification LBA range: start 0x0 length 0x400 00:19:18.295 Nvme8n1 : 0.80 320.88 20.06 80.22 0.00 139786.41 11632.64 177384.11 00:19:18.295 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.295 Job: Nvme9n1 ended in about 0.82 seconds with error 00:19:18.295 Verification LBA range: start 0x0 length 0x400 00:19:18.295 Nvme9n1 : 0.82 233.84 14.61 77.95 0.00 177331.41 15291.73 186122.24 00:19:18.295 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.295 Job: Nvme10n1 ended in about 0.80 seconds with error 00:19:18.295 Verification LBA range: start 0x0 length 0x400 00:19:18.295 Nvme10n1 : 0.80 240.39 15.02 80.13 0.00 168575.36 15728.64 195734.19 00:19:18.295 [2024-11-06T13:02:57.579Z] =================================================================================================================== 00:19:18.295 [2024-11-06T13:02:57.579Z] Total : 2628.87 164.30 721.08 0.00 175475.31 3290.45 195734.19 00:19:18.295 [2024-11-06 14:02:57.366990] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:18.295 [2024-11-06 14:02:57.367036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.367384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aff40 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.367396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aff40 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.367694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aff40 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.367885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:19:18.295 [2024-11-06 14:02:57.367949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:19:18.295 [2024-11-06 14:02:57.367955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:19:18.295 [2024-11-06 14:02:57.367962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:19:18.295 [2024-11-06 14:02:57.367986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.367999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.368005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:19:18.295 [2024-11-06 14:02:57.368370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.368381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126abd0 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.368387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126abd0 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.368706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.368715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe29ea0 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.368720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe29ea0 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.369055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.369062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe391e0 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.369068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe391e0 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.369274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.369282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe393e0 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.369287] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe393e0 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.369558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.369565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126a7b0 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.369574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126a7b0 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.369879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.369888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2cb00 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.369894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cb00 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.370233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.370241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b230 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.370250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b230 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.370555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.370562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b0990 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.370568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0990 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.370885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.295 [2024-11-06 14:02:57.370893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12afd10 with addr=10.0.0.2, port=4420 00:19:18.295 [2024-11-06 14:02:57.370898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12afd10 is same with the state(6) to be set 00:19:18.295 [2024-11-06 14:02:57.370905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126abd0 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe29ea0 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe391e0 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe393e0 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126a7b0 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2cb00 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b230 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370969] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0990 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12afd10 (9): Bad file descriptor 00:19:18.295 [2024-11-06 14:02:57.370981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:19:18.295 [2024-11-06 14:02:57.370985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:19:18.295 [2024-11-06 14:02:57.370991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:19:18.295 [2024-11-06 14:02:57.370997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:19:18.295 [2024-11-06 14:02:57.371002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:19:18.295 [2024-11-06 14:02:57.371007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:19:18.295 [2024-11-06 14:02:57.371012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:19:18.295 [2024-11-06 14:02:57.371018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:19:18.295 [2024-11-06 14:02:57.371024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:19:18.295 [2024-11-06 14:02:57.371028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:19:18.295 [2024-11-06 14:02:57.371033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:19:18.295 [2024-11-06 14:02:57.371038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:19:18.295 [2024-11-06 14:02:57.371043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:19:18.295 [2024-11-06 14:02:57.371047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:19:18.295 [2024-11-06 14:02:57.371052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:19:18.295 [2024-11-06 14:02:57.371057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:19:18.295 [2024-11-06 14:02:57.371062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:19:18.295 [2024-11-06 14:02:57.371066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:19:18.295 [2024-11-06 14:02:57.371071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:19:18.295 [2024-11-06 14:02:57.371076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:19:18.295 [2024-11-06 14:02:57.371096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:18.296 [2024-11-06 14:02:57.371101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:18.296 [2024-11-06 14:02:57.371106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:18.296 [2024-11-06 14:02:57.371110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:18.296 [2024-11-06 14:02:57.371116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:19:18.296 [2024-11-06 14:02:57.371120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:19:18.296 [2024-11-06 14:02:57.371125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:19:18.296 [2024-11-06 14:02:57.371130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:19:18.296 [2024-11-06 14:02:57.371135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:19:18.296 [2024-11-06 14:02:57.371139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:19:18.296 [2024-11-06 14:02:57.371144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:19:18.296 [2024-11-06 14:02:57.371148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:19:18.296 [2024-11-06 14:02:57.371153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:19:18.296 [2024-11-06 14:02:57.371158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:19:18.296 [2024-11-06 14:02:57.371163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:19:18.296 [2024-11-06 14:02:57.371168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
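(Editor's note, not part of the captured log: a quick consistency check on the bdevperf latency table above. Every job ran 64 KiB I/Os ("IO size: 65536"), so the columns should satisfy MiB/s = IOPS x 65536 / 2^20 = IOPS / 16. The Total row checks out, 2628.87 / 16 = 164.30 MiB/s, as do the per-device rows, e.g. Nvme1n1: 242.30 / 16 = 15.14 MiB/s.)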
00:19:18.555 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 921077
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 921077
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 921077
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:19.494 rmmod nvme_tcp
rmmod nvme_fabrics
00:19:19.494 rmmod nvme_keyring
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 920697 ']'
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 920697
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 920697 ']'
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 920697
00:19:19.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (920697) - No such process
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 920697 is not found'
Process with pid 920697 is not found
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:19.494 14:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:21.400 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:21.661
00:19:21.661 real 0m7.202s
00:19:21.661 user 0m17.045s
00:19:21.661 sys 0m0.943s
00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:21.661 ************************************
00:19:21.661 END TEST nvmf_shutdown_tc3
00:19:21.661 ************************************
00:19:21.661 14:03:00
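(Editor's note, not part of the captured log: the stoptarget/nvmftestfini teardown traced above amounts to roughly the following bash sequence. This is a consolidated sketch assembled from the xtrace fragments; the pipe operators in the iptr helper at nvmf/common.sh@791 are inferred, since set -x prints each stage of a pipeline as a separate command.)

  sync
  modprobe -v -r nvme-tcp        # rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill -0 920697                 # fails here: the target app (pid 920697) has already exited
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF-tagged rules (pipeline inferred)
  ip -4 addr flush cvl_0_1       # interface cleanup alongside remove_spdk_ns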
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:21.661 ************************************ 00:19:21.661 START TEST nvmf_shutdown_tc4 00:19:21.661 ************************************ 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:21.661 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:21.661 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.661 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.662 14:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:21.662 Found net devices under 0000:31:00.0: cvl_0_0 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:21.662 Found net devices under 0000:31:00.1: cvl_0_1 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:21.662 14:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:21.662 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:21.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:19:21.921 00:19:21.921 --- 10.0.0.2 ping statistics --- 00:19:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.921 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:19:21.921 00:19:21.921 --- 10.0.0.1 ping statistics --- 00:19:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.921 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:21.921 14:03:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=922597 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 922597 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 922597 ']' 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
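(Editor's note, not part of the captured log: the nvmftestinit trace above, nvmf/common.sh@265-@291, builds the usual two-endpoint TCP topology by moving one port of the e810 NIC into a network namespace. Consolidated as a bash sketch with the interface and namespace names from this run; the individual commands appear in the trace.)

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> namespace, 0.548 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns, 0.260 ms above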
00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:21.921 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:21.921 [2024-11-06 14:03:01.041132] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:19:21.921 [2024-11-06 14:03:01.041179] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.921 [2024-11-06 14:03:01.112717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.921 [2024-11-06 14:03:01.142390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.921 [2024-11-06 14:03:01.142416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.921 [2024-11-06 14:03:01.142422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.921 [2024-11-06 14:03:01.142426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.921 [2024-11-06 14:03:01.142430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.921 [2024-11-06 14:03:01.143863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.921 [2024-11-06 14:03:01.144016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.921 [2024-11-06 14:03:01.144173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.921 [2024-11-06 14:03:01.144175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:22.858 [2024-11-06 14:03:01.841738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.858 14:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:22.858 Malloc1 00:19:22.858 [2024-11-06 14:03:01.929195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.858 Malloc2 00:19:22.858 Malloc3 00:19:22.858 Malloc4 00:19:22.858 Malloc5 00:19:22.858 Malloc6 00:19:22.858 Malloc7 00:19:23.118 Malloc8 00:19:23.118 Malloc9 00:19:23.118 Malloc10 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=923012 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:19:23.118 14:03:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:19:23.118 [2024-11-06 14:03:02.364628] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
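(Editor's note, not part of the captured log: the create_subsystems phase traced above, the repeated shutdown.sh@28/@29 for/cat pairs, appends one block of RPC commands per subsystem to rpcs.txt and then replays the whole file with a single rpc_cmd invocation, which is why Malloc1 through Malloc10 all appear at once. Schematically; the heredoc body and the exact rpc_cmd redirection are not captured by the trace, so they are elided/assumed here, and $testdir stands in for the test/nvmf/target path shown at shutdown.sh@27.)

  rm -rf "$testdir/rpcs.txt"
  for i in "${num_subsystems[@]}"; do        # i = 1..10
      cat >> "$testdir/rpcs.txt" <<EOF
... per-subsystem RPC lines, elided in the trace ...
EOF
  done
  rpc_cmd < "$testdir/rpcs.txt"              # single replay; redirection assumed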
00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 922597 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 922597 ']' 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 922597 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 922597 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 922597' 00:19:28.397 killing process with pid 922597 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 922597 00:19:28.397 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 922597 00:19:28.397 [2024-11-06 14:03:07.373103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.373146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.373153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.373158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.373164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.373169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.373174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153edd0 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.378204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153b350 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.378409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153b840 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.378437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153b840 is same with the state(6) to be set 00:19:28.397 [2024-11-06 14:03:07.378444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x153b840 is same with the state(6) to be set
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:19:28.397 [2024-11-06 14:03:07.378865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error / I/O-failure entries omitted ...]
00:19:28.397 [2024-11-06 14:03:07.379050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15516d0 is same with the state(6) to be set
[... six more identical recv-state errors for tqpair=0x15516d0 (14:03:07.379074-.379117) omitted, interleaved with write errors ...]
00:19:28.397 [2024-11-06 14:03:07.379373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551bc0 is same with the state(6) to be set
[... two more identical recv-state errors for tqpair=0x1551bc0 (14:03:07.379388, .379394) omitted ...]
00:19:28.398 [2024-11-06 14:03:07.379478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:28.398 [2024-11-06 14:03:07.379647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15520b0 is same with the state(6) to be set
[... seven more identical recv-state errors for tqpair=0x15520b0 (14:03:07.379663-.379692) omitted ...]
[... six "starting I/O failed: -6" entries omitted ...]
00:19:28.398 [2024-11-06 14:03:07.379891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15511e0 is same with the state(6) to be set
[... seven more identical recv-state errors for tqpair=0x15511e0 (14:03:07.379914-.379945) omitted ...]
00:19:28.398 NVMe io qpair process completion error
[... write-error / I/O-failure entries omitted ...]
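The "CQ transport error -6 (No such device or address)" entries above come from SPDK's host-side completion poller: -6 is -ENXIO, the errno that spdk_nvme_qpair_process_completions() returns once the TCP connection behind an I/O qpair is gone. A minimal sketch of that polling pattern, illustrative only and assuming a qpair obtained from spdk_nvme_ctrlr_alloc_io_qpair():

    /* Sketch only: poll an NVMe I/O qpair and detect the transport failure
     * reported in the log above. spdk_nvme_qpair_process_completions()
     * returns the number of completions reaped, or a negative errno. */
    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
            /* -ENXIO (-6) is what the log prints as "CQ transport error -6
             * (No such device or address)": the qpair's connection dropped,
             * and outstanding I/Os will complete with an aborted status. */
            fprintf(stderr, "qpair poll failed: %d%s\n", rc,
                    rc == -ENXIO ? " (transport lost)" : "");
        }
    }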
[... write-error / I/O-failure entries omitted ...]
00:19:28.398 [2024-11-06 14:03:07.380771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error / I/O-failure entries omitted ...]
00:19:28.398 [2024-11-06 14:03:07.381221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1525140 is same with the state(6) to be set
[... six more identical recv-state errors for tqpair=0x1525140 (14:03:07.381237-.381272) omitted, interleaved with write errors ...]
00:19:28.398 [2024-11-06 14:03:07.381462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error / I/O-failure entries omitted ...]
00:19:28.399 [2024-11-06 14:03:07.381901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552a70 is same with the state(6) to be set
[... nine more identical recv-state errors for tqpair=0x1552a70 (14:03:07.381918-.381988) omitted, interleaved with write errors ...]
00:19:28.399 [2024-11-06 14:03:07.382166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-error / I/O-failure entries omitted ...]
00:19:28.399 [2024-11-06 14:03:07.382290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552f40 is same with the state(6) to be set
[... three more identical recv-state errors for tqpair=0x1552f40 (14:03:07.382304-.382316) omitted, interleaved with write errors ...]
00:19:28.399 [2024-11-06 14:03:07.382473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553410 is same with the state(6) to be set
[... five more identical recv-state errors for tqpair=0x1553410 (14:03:07.382482-.382502) omitted, interleaved with write errors ...]
00:19:28.400 [2024-11-06 14:03:07.382678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15525a0 is same with the state(6) to be set
[... six more identical recv-state errors for tqpair=0x15525a0 (14:03:07.382694-.382721) omitted, interleaved with write errors ...]
00:19:28.400 [2024-11-06 14:03:07.383260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:28.400 NVMe io qpair process completion error
[... write-error / I/O-failure entries omitted ...]
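Each "Write completed with error (sct=0, sc=8)" entry is a write aborted with NVMe status code type 0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion" - the expected status for I/Os still in flight while a qpair is being torn down. A minimal completion callback that decodes the pair, sketched against SPDK's public spdk/nvme.h (the function name write_done is illustrative):

    /* Sketch only: decode the (sct, sc) status pair that the test's
     * write-completion callback prints in the log above. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* The submission queue was deleted while this write was
                 * outstanding, so it was aborted rather than executed. */
            }
        }
    }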
[... write-error / I/O-failure entries omitted ...]
00:19:28.400 [2024-11-06 14:03:07.384150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error / I/O-failure entries omitted ...]
00:19:28.401 [2024-11-06 14:03:07.384804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error / I/O-failure entries omitted ...]
00:19:28.401 [2024-11-06 14:03:07.385487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-error / I/O-failure entries omitted ...]
00:19:28.402 [2024-11-06 14:03:07.386707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:28.402 NVMe io qpair process completion error
[... write-error / I/O-failure entries omitted ...]
00:19:28.402 [2024-11-06 14:03:07.387622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error / I/O-failure entries omitted ...]
[... write-error / I/O-failure entries omitted ...]
00:19:28.402 [2024-11-06 14:03:07.388278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error / I/O-failure entries omitted ...]
00:19:28.403 [2024-11-06 14:03:07.388969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-error / I/O-failure entries omitted ...]
00:19:28.403 [2024-11-06 14:03:07.390253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:28.403 NVMe io qpair process completion error
[... write-error / I/O-failure entries omitted ...]
00:19:28.404 [2024-11-06 14:03:07.391043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-error / I/O-failure entries omitted ...]
00:19:28.404 [2024-11-06 14:03:07.391703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-error / I/O-failure entries omitted ...]
00:19:28.404 starting I/O failed: -6
00:19:28.404 Write
completed with error (sct=0, sc=8) 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 [2024-11-06 14:03:07.392405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 
00:19:28.404 Write completed with error (sct=0, sc=8) 00:19:28.404 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 [2024-11-06 14:03:07.393834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:28.405 NVMe io qpair process completion error 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 [2024-11-06 14:03:07.394653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, 
sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 [2024-11-06 14:03:07.395345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error 
(sct=0, sc=8) 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.405 Write completed with error (sct=0, sc=8) 00:19:28.405 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 [2024-11-06 14:03:07.396004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed 
with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with 
error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 [2024-11-06 14:03:07.397945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:28.406 NVMe io qpair process completion error 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 
Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 Write completed with error (sct=0, sc=8) 00:19:28.406 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 [2024-11-06 14:03:07.398877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, 
sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 [2024-11-06 14:03:07.399750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write 
completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 [2024-11-06 14:03:07.400447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.407 Write completed with error (sct=0, sc=8) 00:19:28.407 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write 
completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 [2024-11-06 14:03:07.401751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:28.408 NVMe io qpair process completion error 00:19:28.408 Write completed with error 
(sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 [2024-11-06 14:03:07.402765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error 
(sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 [2024-11-06 14:03:07.403375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.408 Write completed with error (sct=0, sc=8) 00:19:28.408 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 starting I/O failed: -6 00:19:28.409 Write completed with error (sct=0, sc=8) 00:19:28.409 Write completed with 
error (sct=0, sc=8)
00:19:28.409 starting I/O failed: -6
00:19:28.409 Write completed with error (sct=0, sc=8)
[... the "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair repeats for every outstanding write on the failing qpairs; only the distinct errors from this stretch are kept below ...]
00:19:28.409 [2024-11-06 14:03:07.404058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:28.409 [2024-11-06 14:03:07.406438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:28.409 NVMe io qpair process completion error
00:19:28.410 [2024-11-06 14:03:07.407421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:28.410 [2024-11-06 14:03:07.408048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:28.410 [2024-11-06 14:03:07.408753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:28.411 [2024-11-06 14:03:07.410610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:28.411 NVMe io qpair process completion error
00:19:28.411 [2024-11-06 14:03:07.411434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:28.411 [2024-11-06 14:03:07.412095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:28.412 [2024-11-06 14:03:07.412802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:28.412 [2024-11-06 14:03:07.413918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:28.412 NVMe io qpair process completion error
00:19:28.412 Write completed with error (sct=0, sc=8)
[... the line above repeats at 00:19:28.412-00:19:28.413 while the remaining queued writes drain with errors ...]
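This burst of failures is the point of the shutdown_tc4 case: an spdk_nvme_perf write workload is still running when the target is torn down, so every in-flight write completes with transport error -6. A minimal sketch of the scenario, not the test's exact code (the -o value is the roughly 44 KiB IO size implied by the latency table further down, and $target_pid is a placeholder for the nvmf target's PID):

    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4' &
    perf_pid=$!
    sleep 5
    kill -9 "$target_pid"   # drop the target mid-run; pending writes fail with -6
    NOT wait "$perf_pid"    # perf must exit non-zero (see the NOT trace below)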
00:19:28.413 Initializing NVMe Controllers
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:19:28.413 Controller IO queue size 128, less than required.
00:19:28.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:19:28.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:19:28.413 Initialization complete. Launching workers.
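The repeated "Controller IO queue size 128, less than required" warning means the benchmark's requested queue depth does not fit below the controller's advertised IO queue size, so excess requests sit in the host driver's software queue. A hedged re-run that heeds the warning (flag values illustrative, same transport ID syntax as above):

    ./build/bin/spdk_nvme_perf -q 64 -o 45056 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4'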
00:19:28.413 ========================================================
00:19:28.413                                                                             Latency(us)
00:19:28.413 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2659.30     114.27   48142.88     452.25   91502.63
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2607.36     112.03   49118.74     605.51  103906.12
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2634.19     113.19   48517.53     442.95   94846.76
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2652.00     113.95   47925.79     440.58  103265.37
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2615.73     112.39   48599.69     602.30   83174.37
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2603.28     111.86   48843.53     680.53   83725.67
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2653.93     114.04   47922.13     577.03   83245.55
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2561.63     110.07   49661.88     681.20   84928.39
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2597.27     111.60   49001.99     677.13   86876.53
00:19:28.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2612.51     112.26   48729.36     623.46   88863.17
00:19:28.413 ========================================================
00:19:28.413 Total                                                                    :   26197.19    1125.66   48640.77     440.58  103906.12
00:19:28.413 
00:19:28.413 [2024-11-06 14:03:07.418160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59c6c0 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59d380 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59c390 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59c060 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59c9f0 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59e360 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59e540 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59d6b0 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59d050 is same with the state(6) to be set
00:19:28.413 [2024-11-06 14:03:07.418388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59d9e0 is same with the state(6) to be set
00:19:28.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:19:28.413 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
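The throughput column is internally consistent: MiB/s equals IOPS times the IO size divided by 2^20, which pins the IO size at 45056 bytes (44 KiB); that size is inferred from the column ratio, not stated in this excerpt. A quick check of the Total row:

    awk 'BEGIN { printf "%.2f\n", 26197.19 * 45056 / 1048576 }'   # prints 1125.66, matching the Total MiB/s cell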
00:19:29.351 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 923012
00:19:29.351 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:19:29.351 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 923012
00:19:29.351 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:19:29.351 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:29.351 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 923012
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:29.352 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:29.352 rmmod nvme_tcp
00:19:29.611 rmmod nvme_fabrics
00:19:29.611 rmmod nvme_keyring
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
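The NOT/valid_exec_arg trace above asserts that waiting on the perf process fails (es=1) once the target is gone. A simplified sketch of the inversion the trace walks through (the real helper in autotest_common.sh also validates its argument and treats signal exits specially):

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))   # invert: succeed only when the wrapped command failed
    }
    # As in the trace: NOT wait 923012 returns 0 because wait exited non-zero.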
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 922597 ']'
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 922597
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 922597 ']'
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 922597
00:19:29.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (922597) - No such process
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 922597 is not found'
00:19:29.611 Process with pid 922597 is not found
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:29.611 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:31.518 
00:19:31.518 real	0m10.006s
00:19:31.518 user	0m27.385s
00:19:31.518 sys	0m3.846s
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:31.518 ************************************
00:19:31.518 END TEST nvmf_shutdown_tc4
************************************
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:19:31.518 
00:19:31.518 real	0m38.450s
00:19:31.518 user	1m36.171s
00:19:31.518 sys	0m10.868s
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
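killprocess probes with kill -0, which sends no signal and only tests whether the PID exists; the "(922597) - No such process" message shows the target app had already exited before cleanup ran, hence the "Process with pid 922597 is not found" path. A hedged sketch of the probe (the real helper also escalates signals and waits for the process):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }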
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:31.518 ************************************
00:19:31.518 END TEST nvmf_shutdown
************************************
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:31.518 14:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:31.778 ************************************
00:19:31.778 START TEST nvmf_nsid
************************************
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:19:31.778 * Looking for test storage...
00:19:31.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
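run_test is the wrapper that produces the START/END banners and the real/user/sys timing blocks seen above. A hedged sketch of its observable behavior (the real function in autotest_common.sh also manages xtrace and performs argument checks such as the '[' 3 -le 1 ']' test in the trace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }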
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:19:31.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.778 --rc genhtml_branch_coverage=1
00:19:31.778 --rc genhtml_function_coverage=1
00:19:31.778 --rc genhtml_legend=1
00:19:31.778 --rc geninfo_all_blocks=1
00:19:31.778 --rc geninfo_unexecuted_blocks=1
00:19:31.778 
00:19:31.778 '
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:19:31.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.778 --rc genhtml_branch_coverage=1
00:19:31.778 --rc genhtml_function_coverage=1
00:19:31.778 --rc genhtml_legend=1
00:19:31.778 --rc geninfo_all_blocks=1
00:19:31.778 --rc geninfo_unexecuted_blocks=1
00:19:31.778 
00:19:31.778 '
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:19:31.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.778 --rc genhtml_branch_coverage=1
00:19:31.778 --rc genhtml_function_coverage=1
00:19:31.778 --rc genhtml_legend=1
00:19:31.778 --rc geninfo_all_blocks=1
00:19:31.778 --rc geninfo_unexecuted_blocks=1
00:19:31.778 
00:19:31.778 '
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:19:31.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.778 --rc genhtml_branch_coverage=1
00:19:31.778 --rc genhtml_function_coverage=1
00:19:31.778 --rc genhtml_legend=1
00:19:31.778 --rc geninfo_all_blocks=1
00:19:31.778 --rc geninfo_unexecuted_blocks=1
00:19:31.778 
00:19:31.778 '
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
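The lt/cmp_versions trace above splits "1.15" and "2" on ".", "-" and ":" and compares the fields numerically, so lt 1.15 2 succeeds and the installed lcov is treated as older than version 2. A condensed sketch of that logic (the real scripts/common.sh also routes each field through the decimal() validator seen in the trace):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "older"   # matches the trace: ver1[0]=1 < ver2[0]=2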
== FreeBSD ]] 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.778 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.779 14:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:37.061 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:37.061 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
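[editor's note] The e810/x722/mlx arrays filled in above are the supported-NIC tables: each entry is a PCI address cached under its vendor:device pair, and this run matched two Intel E810 ports (0x8086:0x159b at 0000:31:00.0 and .1). A minimal sysfs-based reconstruction of that discovery follows; common.sh actually reads a prebuilt pci_bus_cache, so the bus walk here is illustrative, not the exact source.

    #!/usr/bin/env bash
    # Walk the PCI bus and collect addresses whose vendor:device pair is in
    # the supported E810 set (IDs as in the trace: 0x8086 with 0x1592/0x159b).
    # Illustrative stand-in for common.sh, which uses a prebuilt pci_bus_cache.
    shopt -s extglob
    declare -a e810=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == 0x8086 && $device == @(0x1592|0x159b) ]] && e810+=("${dev##*/}")
    done
    for pci in "${e810[@]}"; do
        echo "Found $pci (0x8086 - $(<"/sys/bus/pci/devices/$pci/device"))"
        # Resolve the address to its kernel net device(s), as the trace does next:
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] && echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done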
00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:37.061 Found net devices under 0000:31:00.0: cvl_0_0 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.061 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:37.062 Found net devices under 0000:31:00.1: cvl_0_1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.062 14:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:19:37.062 00:19:37.062 --- 10.0.0.2 ping statistics --- 00:19:37.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.062 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:19:37.062 00:19:37.062 --- 10.0.0.1 ping statistics --- 00:19:37.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.062 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=929143 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 929143 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 929143 ']' 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.062 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:37.352 [2024-11-06 14:03:16.350768] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
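[editor's note] nvmf_tcp_init, traced above, builds the two-port loopback topology used for phy TCP runs: one E810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), the other is moved into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2), port 4420 is opened with a tagged iptables rule, and both directions are ping-verified. Condensed from the traced commands:

    #!/usr/bin/env bash
    # Condensed from the nvmf_tcp_init trace: wire two physical ports
    # back-to-back across a network namespace so target and initiator
    # traffic actually leaves the host stack.
    set -e
    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) through, tagged so teardown can find the rule:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator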
00:19:37.352 [2024-11-06 14:03:16.350821] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.352 [2024-11-06 14:03:16.436077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.352 [2024-11-06 14:03:16.471526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.352 [2024-11-06 14:03:16.471556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.352 [2024-11-06 14:03:16.471564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.352 [2024-11-06 14:03:16.471570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.352 [2024-11-06 14:03:16.471576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.352 [2024-11-06 14:03:16.472146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=929168 00:19:37.352 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
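[editor's note] get_main_ns_ip, which just resolved to 10.0.0.1 above, picks the address by transport: an associative array maps each transport to the *name* of the variable holding the right IP, and bash indirect expansion (${!ip}) dereferences it. A sketch of that pattern with values from this run; the function body is a reconstruction, not the exact source:

    #!/usr/bin/env bash
    # Sketch of the get_main_ns_ip pattern seen in the trace: map the
    # transport to a variable *name*, then dereference it indirectly.
    NVMF_FIRST_TARGET_IP=10.0.0.2   # values from this run
    NVMF_INITIATOR_IP=10.0.0.1
    TEST_TRANSPORT=tcp

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        [[ -n ${!ip} ]] || return 1            # indirect expansion: $NVMF_INITIATOR_IP
        echo "${!ip}"
    }

    get_main_ns_ip   # prints 10.0.0.1, matching the trace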
00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=b51b3c1a-27ff-405f-a76d-a184ce2d35e2 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b1d93878-da0c-48e9-9f28-5feda5296c44 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=bcabb858-4d70-4694-9340-c43b114c7734 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.353 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 null0 00:19:37.353 null1 00:19:37.353 null2 00:19:37.353 [2024-11-06 14:03:16.616759] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:19:37.353 [2024-11-06 14:03:16.616808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929168 ] 00:19:37.353 [2024-11-06 14:03:16.620236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.649 [2024-11-06 14:03:16.644437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 929168 /var/tmp/tgt2.sock 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 929168 ']' 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:19:37.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
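[editor's note] At this point two targets are running at once: the namespaced nvmf_tgt (pid 929143) on the default /var/tmp/spdk.sock, and a second spdk_tgt (pid 929168) on /var/tmp/tgt2.sock, which will carry the null0/null1/null2 namespaces whose UUIDs were just generated. Driving two instances only takes pointing rpc.py at the right socket. A sketch, with sockets, NQN, and bdev names taken from the log and the RPC arguments illustrative rather than the test's exact calls:

    #!/usr/bin/env bash
    # Two SPDK targets side by side: each gets its own RPC socket, and
    # rpc.py -s selects which instance a command is sent to.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/tgt2.sock &   # second target, as in the trace
    tgt2pid=$!

    rpc2() { "$SPDK/scripts/rpc.py" -s /var/tmp/tgt2.sock "$@"; }
    # waitforlisten in the test polls the socket until it answers:
    until rpc2 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

    rpc2 bdev_null_create null0 100 4096      # one of the null0/null1/null2 bdevs
    rpc2 nvmf_create_transport -t tcp -o
    rpc2 nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    rpc2 nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0
    rpc2 nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421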
00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:37.649 [2024-11-06 14:03:16.693780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.649 [2024-11-06 14:03:16.730455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:19:37.649 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:38.219 [2024-11-06 14:03:17.198713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.219 [2024-11-06 14:03:17.214896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:38.219 nvme0n1 nvme0n2 00:19:38.219 nvme1n1 00:19:38.219 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:38.219 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:38.219 14:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:19:39.599 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:40.538 14:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid b51b3c1a-27ff-405f-a76d-a184ce2d35e2 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b51b3c1a27ff405fa76da184ce2d35e2 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B51B3C1A27FF405FA76DA184CE2D35E2 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ B51B3C1A27FF405FA76DA184CE2D35E2 == \B\5\1\B\3\C\1\A\2\7\F\F\4\0\5\F\A\7\6\D\A\1\8\4\C\E\2\D\3\5\E\2 ]] 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b1d93878-da0c-48e9-9f28-5feda5296c44 00:19:40.538 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b1d93878da0c48e99f285feda5296c44 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B1D93878DA0C48E99F285FEDA5296C44 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B1D93878DA0C48E99F285FEDA5296C44 == \B\1\D\9\3\8\7\8\D\A\0\C\4\8\E\9\9\F\2\8\5\F\E\D\A\5\2\9\6\C\4\4 ]] 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:19:40.539 14:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid bcabb858-4d70-4694-9340-c43b114c7734 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bcabb8584d7046949340c43b114c7734 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BCABB8584D7046949340C43B114C7734 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ BCABB8584D7046949340C43B114C7734 == \B\C\A\B\B\8\5\8\4\D\7\0\4\6\9\4\9\3\4\0\C\4\3\B\1\1\4\C\7\7\3\4 ]] 00:19:40.539 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 929168 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 929168 ']' 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 929168 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.798 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929168 00:19:40.798 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:40.798 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:40.798 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929168' 00:19:40.798 killing process with pid 929168 00:19:40.798 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 929168 00:19:40.798 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 929168 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.057 rmmod nvme_tcp 00:19:41.057 rmmod nvme_fabrics 00:19:41.057 rmmod nvme_keyring 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 929143 ']' 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 929143 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 929143 ']' 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 929143 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929143 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:41.057 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929143' 00:19:41.057 killing process with pid 929143 00:19:41.058 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 929143 00:19:41.058 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 929143 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.318 14:03:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.226 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:43.226 00:19:43.226 real 0m11.667s 00:19:43.226 user 0m9.198s 00:19:43.226 
sys 0m4.876s 00:19:43.226 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:43.226 14:03:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:43.226 ************************************ 00:19:43.226 END TEST nvmf_nsid 00:19:43.226 ************************************ 00:19:43.226 14:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:43.226 00:19:43.226 real 11m34.015s 00:19:43.226 user 25m33.507s 00:19:43.226 sys 2m56.416s 00:19:43.226 14:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:43.486 14:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.486 ************************************ 00:19:43.486 END TEST nvmf_target_extra 00:19:43.486 ************************************ 00:19:43.486 14:03:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:43.486 14:03:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:43.486 14:03:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:43.486 14:03:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.486 ************************************ 00:19:43.486 START TEST nvmf_host 00:19:43.486 ************************************ 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:43.486 * Looking for test storage... 00:19:43.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.486 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:43.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.487 --rc genhtml_branch_coverage=1 00:19:43.487 --rc genhtml_function_coverage=1 00:19:43.487 --rc genhtml_legend=1 00:19:43.487 --rc geninfo_all_blocks=1 00:19:43.487 --rc geninfo_unexecuted_blocks=1 00:19:43.487 00:19:43.487 ' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:43.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.487 --rc genhtml_branch_coverage=1 00:19:43.487 --rc genhtml_function_coverage=1 00:19:43.487 --rc genhtml_legend=1 00:19:43.487 --rc geninfo_all_blocks=1 00:19:43.487 --rc geninfo_unexecuted_blocks=1 00:19:43.487 00:19:43.487 ' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:43.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.487 --rc genhtml_branch_coverage=1 00:19:43.487 --rc genhtml_function_coverage=1 00:19:43.487 --rc genhtml_legend=1 00:19:43.487 --rc geninfo_all_blocks=1 00:19:43.487 --rc geninfo_unexecuted_blocks=1 00:19:43.487 00:19:43.487 ' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:43.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.487 --rc genhtml_branch_coverage=1 00:19:43.487 --rc genhtml_function_coverage=1 00:19:43.487 --rc genhtml_legend=1 00:19:43.487 --rc geninfo_all_blocks=1 00:19:43.487 --rc geninfo_unexecuted_blocks=1 00:19:43.487 00:19:43.487 ' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
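[editor's note] The cmp_versions trace repeated above (here gating lcov 1.15 against 2) splits each version string on ., - and : and compares field by field, treating missing fields as 0. A standalone reconstruction covering the < and > cases; the real scripts/common.sh additionally routes fields through a decimal sanitizer, so numeric fields are assumed here:

    #!/usr/bin/env bash
    # Field-by-field version compare, reconstructed from the traced
    # scripts/common.sh logic: succeeds if "$1 $2 $3" holds.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }

    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2"   # matches the trace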
00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.487 ************************************ 00:19:43.487 START TEST nvmf_multicontroller 00:19:43.487 ************************************ 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:43.487 * Looking for test storage... 
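[editor's note] The "[: : integer expression expected" message emitted each time nvmf/common.sh is sourced (line 33, visible above) is a benign scripting bug: '[' '' -eq 1 ']' applies a numeric test to an empty expansion because the flag variable behind it is unset; its name is not visible in the trace. Defaulting the expansion avoids it:

    #!/usr/bin/env bash
    # Why the log prints "[: : integer expression expected":
    VAR=""               # unset/empty flag, as on the failing line 33
    # [ "$VAR" -eq 1 ]   # -> "[: : integer expression expected" (exit 2)

    # Two common fixes: default the expansion, or use an arithmetic context.
    if [ "${VAR:-0}" -eq 1 ]; then echo "flag set"; fi   # empty counts as 0
    if (( ${VAR:-0} == 1 )); then echo "flag set"; fi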
00:19:43.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:43.487 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.749 --rc genhtml_branch_coverage=1 00:19:43.749 --rc genhtml_function_coverage=1 00:19:43.749 --rc genhtml_legend=1 00:19:43.749 --rc geninfo_all_blocks=1 00:19:43.749 --rc geninfo_unexecuted_blocks=1 00:19:43.749 00:19:43.749 ' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.749 --rc genhtml_branch_coverage=1 00:19:43.749 --rc genhtml_function_coverage=1 00:19:43.749 --rc genhtml_legend=1 00:19:43.749 --rc geninfo_all_blocks=1 00:19:43.749 --rc geninfo_unexecuted_blocks=1 00:19:43.749 00:19:43.749 ' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.749 --rc genhtml_branch_coverage=1 00:19:43.749 --rc genhtml_function_coverage=1 00:19:43.749 --rc genhtml_legend=1 00:19:43.749 --rc geninfo_all_blocks=1 00:19:43.749 --rc geninfo_unexecuted_blocks=1 00:19:43.749 00:19:43.749 ' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:43.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.749 --rc genhtml_branch_coverage=1 00:19:43.749 --rc genhtml_function_coverage=1 00:19:43.749 --rc genhtml_legend=1 00:19:43.749 --rc geninfo_all_blocks=1 00:19:43.749 --rc geninfo_unexecuted_blocks=1 00:19:43.749 00:19:43.749 ' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:43.749 14:03:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.749 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.750 14:03:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:19:43.750 14:03:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:19:49.032 
14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.032 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:49.032 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:49.033 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.033 14:03:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:49.033 Found net devices under 0000:31:00.0: cvl_0_0 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:49.033 Found net devices under 0000:31:00.1: cvl_0_1 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
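The device discovery traced above maps each matching PCI function to its kernel netdev by globbing sysfs; this is how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1. A condensed sketch of that lookup (PCI address taken from this run):

  pci=0000:31:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"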
00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:49.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:19:49.033 00:19:49.033 --- 10.0.0.2 ping statistics --- 00:19:49.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.033 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:19:49.033 00:19:49.033 --- 10.0.0.1 ping statistics --- 00:19:49.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.033 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=934291 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 934291 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 934291 ']' 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.033 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:49.294 [2024-11-06 14:03:28.330893] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
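nvmf_tcp_init, traced above, splits one physical link into target and initiator endpoints by moving the target-side port into a private network namespace; that is why the nvmf_tgt launch below is wrapped in `ip netns exec cvl_0_0_ns_spdk`. The essential commands, condensed from the trace (interface names and addresses as on this runner):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator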
00:19:49.294 [2024-11-06 14:03:28.330943] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.294 [2024-11-06 14:03:28.402871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:49.294 [2024-11-06 14:03:28.432442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.294 [2024-11-06 14:03:28.432471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.294 [2024-11-06 14:03:28.432477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.294 [2024-11-06 14:03:28.432482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.294 [2024-11-06 14:03:28.432486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.294 [2024-11-06 14:03:28.433599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.294 [2024-11-06 14:03:28.433752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.294 [2024-11-06 14:03:28.433754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.294 [2024-11-06 14:03:28.537110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.294 Malloc0 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.294 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.554 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 [2024-11-06 14:03:28.591759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 [2024-11-06 14:03:28.599693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 Malloc1 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=934501 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 934501 /var/tmp/bdevperf.sock 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 934501 ']' 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
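bdevperf has just been started with -z, so it sits idle on /var/tmp/bdevperf.sock until it is configured over JSON-RPC; the queue depth (-q 128), I/O size (-o 4096), workload (-w write) and duration (-t 1) match the result fields reported at the end of the run. The whole flow, condensed, with $SPDK_DIR standing for the checkout root and rpc.py substituted for the test suite's rpc_cmd wrapper:

  $SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w write -t 1 -f &              # -z: wait for RPC configuration
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1      # creates bdev NVMe0n1
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests        # triggers the timed run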
00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:49.555 14:03:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.494 NVMe0n1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.494 1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.494 request: 00:19:50.494 { 00:19:50.494 "name": "NVMe0", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:50.494 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:50.494 "hostaddr": "10.0.0.1", 00:19:50.494 "prchk_reftag": false, 00:19:50.494 "prchk_guard": false, 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false, 00:19:50.494 "allow_unrecognized_csi": false, 00:19:50.494 "method": "bdev_nvme_attach_controller", 00:19:50.494 "req_id": 1 00:19:50.494 } 00:19:50.494 Got JSON-RPC error response 00:19:50.494 response: 00:19:50.494 { 00:19:50.494 "code": -114, 00:19:50.494 "message": "A controller named NVMe0 already exists with the specified network path" 00:19:50.494 } 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.494 request: 00:19:50.494 { 00:19:50.494 "name": "NVMe0", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.494 "hostaddr": "10.0.0.1", 00:19:50.494 "prchk_reftag": false, 00:19:50.494 "prchk_guard": false, 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false, 00:19:50.494 "allow_unrecognized_csi": false, 00:19:50.494 "method": "bdev_nvme_attach_controller", 00:19:50.494 "req_id": 1 00:19:50.494 } 00:19:50.494 Got JSON-RPC error response 00:19:50.494 response: 00:19:50.494 { 00:19:50.494 "code": -114, 00:19:50.494 "message": "A controller named NVMe0 already exists with the specified network path" 00:19:50.494 } 00:19:50.494 14:03:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.494 request: 00:19:50.494 { 00:19:50.494 "name": "NVMe0", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.494 "hostaddr": "10.0.0.1", 00:19:50.494 "prchk_reftag": false, 00:19:50.494 "prchk_guard": false, 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false, 00:19:50.494 "multipath": "disable", 00:19:50.494 "allow_unrecognized_csi": false, 00:19:50.494 "method": "bdev_nvme_attach_controller", 00:19:50.494 "req_id": 1 00:19:50.494 } 00:19:50.494 Got JSON-RPC error response 00:19:50.494 response: 00:19:50.494 { 00:19:50.494 "code": -114, 00:19:50.494 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:19:50.494 } 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.494 14:03:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.494 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.495 request: 00:19:50.495 { 00:19:50.495 "name": "NVMe0", 00:19:50.495 "trtype": "tcp", 00:19:50.495 "traddr": "10.0.0.2", 00:19:50.495 "adrfam": "ipv4", 00:19:50.495 "trsvcid": "4420", 00:19:50.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.495 "hostaddr": "10.0.0.1", 00:19:50.495 "prchk_reftag": false, 00:19:50.495 "prchk_guard": false, 00:19:50.495 "hdgst": false, 00:19:50.495 "ddgst": false, 00:19:50.495 "multipath": "failover", 00:19:50.495 "allow_unrecognized_csi": false, 00:19:50.495 "method": "bdev_nvme_attach_controller", 00:19:50.495 "req_id": 1 00:19:50.495 } 00:19:50.495 Got JSON-RPC error response 00:19:50.495 response: 00:19:50.495 { 00:19:50.495 "code": -114, 00:19:50.495 "message": "A controller named NVMe0 already exists with the specified network path" 00:19:50.495 } 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.495 NVMe0n1 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
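One reading of the four rejected attaches above: reusing the bdev name NVMe0 against the same listener is refused with -114 whether the request changes the hostnqn, targets a different subsystem, or passes -x disable or -x failover, because that exact network path already exists (and with -x disable a second path would not be allowed at all). What does succeed, in the last call above, is the same name against the subsystem's other listener, which attaches 10.0.0.2:4421 as an additional path under the existing controller:

  # Hypothetical condensation of the one accepted call (rpc.py path assumed):
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1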
00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.495 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.754 00:19:50.754 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.754 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:50.754 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:50.754 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.754 14:03:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:50.754 14:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.754 14:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:50.754 14:03:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:52.136 { 00:19:52.136 "results": [ 00:19:52.136 { 00:19:52.136 "job": "NVMe0n1", 00:19:52.136 "core_mask": "0x1", 00:19:52.136 "workload": "write", 00:19:52.136 "status": "finished", 00:19:52.136 "queue_depth": 128, 00:19:52.136 "io_size": 4096, 00:19:52.136 "runtime": 1.006184, 00:19:52.136 "iops": 27861.70322724273, 00:19:52.136 "mibps": 108.83477823141692, 00:19:52.136 "io_failed": 0, 00:19:52.136 "io_timeout": 0, 00:19:52.136 "avg_latency_us": 4579.966931583077, 00:19:52.136 "min_latency_us": 2962.7733333333335, 00:19:52.136 "max_latency_us": 13052.586666666666 00:19:52.136 } 00:19:52.136 ], 00:19:52.136 "core_count": 1 00:19:52.136 } 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 934501 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 934501 ']' 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 934501 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 934501 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 934501' 00:19:52.136 killing process with pid 934501 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 934501 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 934501 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:19:52.136 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:19:52.136 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:52.137 [2024-11-06 14:03:28.683233] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:19:52.137 [2024-11-06 14:03:28.683299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934501 ] 00:19:52.137 [2024-11-06 14:03:28.761080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.137 [2024-11-06 14:03:28.797464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.137 [2024-11-06 14:03:29.992917] bdev.c:4897:bdev_name_add: *ERROR*: Bdev name d733c99d-c443-4d0f-96f8-315f11f790c0 already exists 00:19:52.137 [2024-11-06 14:03:29.992946] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:d733c99d-c443-4d0f-96f8-315f11f790c0 alias for bdev NVMe1n1 00:19:52.137 [2024-11-06 14:03:29.992955] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:52.137 Running I/O for 1 seconds... 00:19:52.137 27858.00 IOPS, 108.82 MiB/s 00:19:52.137 Latency(us) 00:19:52.137 [2024-11-06T13:03:31.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.137 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:52.137 NVMe0n1 : 1.01 27861.70 108.83 0.00 0.00 4579.97 2962.77 13052.59 00:19:52.137 [2024-11-06T13:03:31.421Z] =================================================================================================================== 00:19:52.137 [2024-11-06T13:03:31.421Z] Total : 27861.70 108.83 0.00 0.00 4579.97 2962.77 13052.59 00:19:52.137 Received shutdown signal, test time was about 1.000000 seconds 00:19:52.137 00:19:52.137 Latency(us) 00:19:52.137 [2024-11-06T13:03:31.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.137 [2024-11-06T13:03:31.421Z] =================================================================================================================== 00:19:52.137 [2024-11-06T13:03:31.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.137 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.137 rmmod nvme_tcp 00:19:52.137 rmmod nvme_fabrics 00:19:52.137 rmmod nvme_keyring 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:19:52.137 
14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 934291 ']' 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 934291 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 934291 ']' 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 934291 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.137 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 934291 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 934291' 00:19:52.396 killing process with pid 934291 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 934291 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 934291 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:19:52.396 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.397 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.397 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.397 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.397 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.397 14:03:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.934 00:19:54.934 real 0m10.911s 00:19:54.934 user 0m13.586s 00:19:54.934 sys 0m4.700s 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:54.934 ************************************ 00:19:54.934 END TEST nvmf_multicontroller 00:19:54.934 ************************************ 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.934 ************************************ 00:19:54.934 START TEST nvmf_aer 00:19:54.934 ************************************ 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:54.934 * Looking for test storage... 00:19:54.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.934 --rc genhtml_branch_coverage=1 00:19:54.934 --rc genhtml_function_coverage=1 00:19:54.934 --rc genhtml_legend=1 00:19:54.934 --rc geninfo_all_blocks=1 00:19:54.934 --rc geninfo_unexecuted_blocks=1 00:19:54.934 00:19:54.934 ' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.934 --rc genhtml_branch_coverage=1 00:19:54.934 --rc genhtml_function_coverage=1 00:19:54.934 --rc genhtml_legend=1 00:19:54.934 --rc geninfo_all_blocks=1 00:19:54.934 --rc geninfo_unexecuted_blocks=1 00:19:54.934 00:19:54.934 ' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.934 --rc genhtml_branch_coverage=1 00:19:54.934 --rc genhtml_function_coverage=1 00:19:54.934 --rc genhtml_legend=1 00:19:54.934 --rc geninfo_all_blocks=1 00:19:54.934 --rc geninfo_unexecuted_blocks=1 00:19:54.934 00:19:54.934 ' 00:19:54.934 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.934 --rc genhtml_branch_coverage=1 00:19:54.934 --rc genhtml_function_coverage=1 00:19:54.935 --rc genhtml_legend=1 00:19:54.935 --rc geninfo_all_blocks=1 00:19:54.935 --rc geninfo_unexecuted_blocks=1 00:19:54.935 00:19:54.935 ' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.935 14:03:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:20:00.208 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:00.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:00.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:00.209 Found net devices under 0000:31:00.0: cvl_0_0 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.209 14:03:39 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:00.209 Found net devices under 0000:31:00.1: cvl_0_1 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.209 
14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:20:00.209 00:20:00.209 --- 10.0.0.2 ping statistics --- 00:20:00.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.209 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:20:00.209 00:20:00.209 --- 10.0.0.1 ping statistics --- 00:20:00.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.209 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=939386 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 939386 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 939386 ']' 00:20:00.209 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.210 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.210 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.210 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.210 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:00.210 14:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.210 [2024-11-06 14:03:39.379919] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:20:00.210 [2024-11-06 14:03:39.379967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.210 [2024-11-06 14:03:39.463795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.470 [2024-11-06 14:03:39.502061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.470 [2024-11-06 14:03:39.502092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.470 [2024-11-06 14:03:39.502101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.470 [2024-11-06 14:03:39.502107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.470 [2024-11-06 14:03:39.502113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.470 [2024-11-06 14:03:39.503765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.470 [2024-11-06 14:03:39.503884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.470 [2024-11-06 14:03:39.504036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.470 [2024-11-06 14:03:39.504037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 [2024-11-06 14:03:40.191006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 Malloc0 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 [2024-11-06 14:03:40.245680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.039 [ 00:20:01.039 { 00:20:01.039 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:01.039 "subtype": "Discovery", 00:20:01.039 "listen_addresses": [], 00:20:01.039 "allow_any_host": true, 00:20:01.039 "hosts": [] 00:20:01.039 }, 00:20:01.039 { 00:20:01.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.039 "subtype": "NVMe", 00:20:01.039 "listen_addresses": [ 00:20:01.039 { 00:20:01.039 "trtype": "TCP", 00:20:01.039 "adrfam": "IPv4", 00:20:01.039 "traddr": "10.0.0.2", 00:20:01.039 "trsvcid": "4420" 00:20:01.039 } 00:20:01.039 ], 00:20:01.039 "allow_any_host": true, 00:20:01.039 "hosts": [], 00:20:01.039 "serial_number": "SPDK00000000000001", 00:20:01.039 "model_number": "SPDK bdev Controller", 00:20:01.039 "max_namespaces": 2, 00:20:01.039 "min_cntlid": 1, 00:20:01.039 "max_cntlid": 65519, 00:20:01.039 "namespaces": [ 00:20:01.039 { 00:20:01.039 "nsid": 1, 00:20:01.039 "bdev_name": "Malloc0", 00:20:01.039 "name": "Malloc0", 00:20:01.039 "nguid": "23B0BAABEBA7481CA141A67FF9CDD5C2", 00:20:01.039 "uuid": "23b0baab-eba7-481c-a141-a67ff9cdd5c2" 00:20:01.039 } 00:20:01.039 ] 00:20:01.039 } 00:20:01.039 ] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=939687 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:01.039 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:20:01.040 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.298 Malloc1 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.298 Asynchronous Event Request test 00:20:01.298 Attaching to 10.0.0.2 00:20:01.298 Attached to 10.0.0.2 00:20:01.298 Registering asynchronous event callbacks... 00:20:01.298 Starting namespace attribute notice tests for all controllers... 00:20:01.298 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:01.298 aer_cb - Changed Namespace 00:20:01.298 Cleaning up... 
00:20:01.298 [ 00:20:01.298 { 00:20:01.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:01.298 "subtype": "Discovery", 00:20:01.298 "listen_addresses": [], 00:20:01.298 "allow_any_host": true, 00:20:01.298 "hosts": [] 00:20:01.298 }, 00:20:01.298 { 00:20:01.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.298 "subtype": "NVMe", 00:20:01.298 "listen_addresses": [ 00:20:01.298 { 00:20:01.298 "trtype": "TCP", 00:20:01.298 "adrfam": "IPv4", 00:20:01.298 "traddr": "10.0.0.2", 00:20:01.298 "trsvcid": "4420" 00:20:01.298 } 00:20:01.298 ], 00:20:01.298 "allow_any_host": true, 00:20:01.298 "hosts": [], 00:20:01.298 "serial_number": "SPDK00000000000001", 00:20:01.298 "model_number": "SPDK bdev Controller", 00:20:01.298 "max_namespaces": 2, 00:20:01.298 "min_cntlid": 1, 00:20:01.298 "max_cntlid": 65519, 00:20:01.298 "namespaces": [ 00:20:01.298 { 00:20:01.298 "nsid": 1, 00:20:01.298 "bdev_name": "Malloc0", 00:20:01.298 "name": "Malloc0", 00:20:01.298 "nguid": "23B0BAABEBA7481CA141A67FF9CDD5C2", 00:20:01.298 "uuid": "23b0baab-eba7-481c-a141-a67ff9cdd5c2" 00:20:01.298 }, 00:20:01.298 { 00:20:01.298 "nsid": 2, 00:20:01.298 "bdev_name": "Malloc1", 00:20:01.298 "name": "Malloc1", 00:20:01.298 "nguid": "18EB880783CD49BE95055B8D41B64AD2", 00:20:01.298 "uuid": "18eb8807-83cd-49be-9505-5b8d41b64ad2" 00:20:01.298 } 00:20:01.298 ] 00:20:01.298 } 00:20:01.298 ] 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 939687 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.298 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.299 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.299 rmmod 
nvme_tcp 00:20:01.557 rmmod nvme_fabrics 00:20:01.557 rmmod nvme_keyring 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 939386 ']' 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 939386 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 939386 ']' 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 939386 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 939386 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:01.557 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 939386' 00:20:01.557 killing process with pid 939386 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 939386 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 939386 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.558 14:03:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:04.093 00:20:04.093 real 0m9.177s 00:20:04.093 user 0m6.659s 00:20:04.093 sys 0m4.554s 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.093 ************************************ 00:20:04.093 END TEST nvmf_aer 00:20:04.093 ************************************ 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.093 ************************************ 00:20:04.093 START TEST nvmf_async_init 00:20:04.093 ************************************ 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:04.093 * Looking for test storage... 00:20:04.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:20:04.093 14:03:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:04.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.093 --rc genhtml_branch_coverage=1 00:20:04.093 --rc genhtml_function_coverage=1 00:20:04.093 --rc genhtml_legend=1 00:20:04.093 --rc geninfo_all_blocks=1 00:20:04.093 --rc geninfo_unexecuted_blocks=1 00:20:04.093 00:20:04.093 ' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:04.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.093 --rc genhtml_branch_coverage=1 00:20:04.093 --rc genhtml_function_coverage=1 00:20:04.093 --rc genhtml_legend=1 00:20:04.093 --rc geninfo_all_blocks=1 00:20:04.093 --rc geninfo_unexecuted_blocks=1 00:20:04.093 00:20:04.093 ' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:04.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.093 --rc genhtml_branch_coverage=1 00:20:04.093 --rc genhtml_function_coverage=1 00:20:04.093 --rc genhtml_legend=1 00:20:04.093 --rc geninfo_all_blocks=1 00:20:04.093 --rc geninfo_unexecuted_blocks=1 00:20:04.093 00:20:04.093 ' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:04.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.093 --rc genhtml_branch_coverage=1 00:20:04.093 --rc genhtml_function_coverage=1 00:20:04.093 --rc genhtml_legend=1 00:20:04.093 --rc geninfo_all_blocks=1 00:20:04.093 --rc geninfo_unexecuted_blocks=1 00:20:04.093 00:20:04.093 ' 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:04.093 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.094 14:03:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:04.094 14:03:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3dffe9d197284d008b7435d123b29f6c 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.094 14:03:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:09.372 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:09.372 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:09.372 Found net devices under 0000:31:00.0: cvl_0_0 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.372 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:09.373 Found net devices under 0000:31:00.1: cvl_0_1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.373 14:03:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:20:09.373 00:20:09.373 --- 10.0.0.2 ping statistics --- 00:20:09.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.373 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:20:09.373 00:20:09.373 --- 10.0.0.1 ping statistics --- 00:20:09.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.373 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=944032 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 944032 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 944032 ']' 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 14:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:09.373 [2024-11-06 14:03:48.484741] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:20:09.373 [2024-11-06 14:03:48.484804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.373 [2024-11-06 14:03:48.575249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.373 [2024-11-06 14:03:48.627282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.373 [2024-11-06 14:03:48.627330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.373 [2024-11-06 14:03:48.627339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.373 [2024-11-06 14:03:48.627346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.373 [2024-11-06 14:03:48.627353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.373 [2024-11-06 14:03:48.628122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.311 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 [2024-11-06 14:03:49.301707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 null0 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3dffe9d197284d008b7435d123b29f6c 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 [2024-11-06 14:03:49.341907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 nvme0n1 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 [ 00:20:10.312 { 00:20:10.312 "name": "nvme0n1", 00:20:10.312 "aliases": [ 00:20:10.312 "3dffe9d1-9728-4d00-8b74-35d123b29f6c" 00:20:10.312 ], 00:20:10.312 "product_name": "NVMe disk", 00:20:10.312 "block_size": 512, 00:20:10.312 "num_blocks": 2097152, 00:20:10.312 "uuid": "3dffe9d1-9728-4d00-8b74-35d123b29f6c", 00:20:10.312 "numa_id": 0, 00:20:10.312 "assigned_rate_limits": { 00:20:10.312 "rw_ios_per_sec": 0, 00:20:10.312 "rw_mbytes_per_sec": 0, 00:20:10.312 "r_mbytes_per_sec": 0, 00:20:10.312 "w_mbytes_per_sec": 0 00:20:10.312 }, 00:20:10.312 "claimed": false, 00:20:10.312 "zoned": false, 00:20:10.312 "supported_io_types": { 00:20:10.312 "read": true, 00:20:10.312 "write": true, 00:20:10.312 "unmap": false, 00:20:10.312 "flush": true, 00:20:10.312 "reset": true, 00:20:10.312 "nvme_admin": true, 00:20:10.312 "nvme_io": true, 00:20:10.312 "nvme_io_md": false, 00:20:10.312 "write_zeroes": true, 00:20:10.312 "zcopy": false, 00:20:10.312 "get_zone_info": false, 00:20:10.312 "zone_management": false, 00:20:10.312 "zone_append": false, 00:20:10.312 "compare": true, 00:20:10.312 "compare_and_write": true, 00:20:10.312 "abort": true, 00:20:10.312 "seek_hole": false, 00:20:10.312 "seek_data": false, 00:20:10.312 "copy": true, 00:20:10.312 "nvme_iov_md": false 00:20:10.312 }, 00:20:10.312 
"memory_domains": [ 00:20:10.312 { 00:20:10.312 "dma_device_id": "system", 00:20:10.312 "dma_device_type": 1 00:20:10.312 } 00:20:10.312 ], 00:20:10.312 "driver_specific": { 00:20:10.312 "nvme": [ 00:20:10.312 { 00:20:10.312 "trid": { 00:20:10.312 "trtype": "TCP", 00:20:10.312 "adrfam": "IPv4", 00:20:10.312 "traddr": "10.0.0.2", 00:20:10.312 "trsvcid": "4420", 00:20:10.312 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:10.312 }, 00:20:10.312 "ctrlr_data": { 00:20:10.312 "cntlid": 1, 00:20:10.312 "vendor_id": "0x8086", 00:20:10.312 "model_number": "SPDK bdev Controller", 00:20:10.312 "serial_number": "00000000000000000000", 00:20:10.312 "firmware_revision": "25.01", 00:20:10.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:10.312 "oacs": { 00:20:10.312 "security": 0, 00:20:10.312 "format": 0, 00:20:10.312 "firmware": 0, 00:20:10.312 "ns_manage": 0 00:20:10.312 }, 00:20:10.312 "multi_ctrlr": true, 00:20:10.312 "ana_reporting": false 00:20:10.312 }, 00:20:10.312 "vs": { 00:20:10.312 "nvme_version": "1.3" 00:20:10.312 }, 00:20:10.312 "ns_data": { 00:20:10.312 "id": 1, 00:20:10.312 "can_share": true 00:20:10.312 } 00:20:10.312 } 00:20:10.312 ], 00:20:10.312 "mp_policy": "active_passive" 00:20:10.312 } 00:20:10.312 } 00:20:10.312 ] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.312 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.312 [2024-11-06 14:03:49.590980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:10.312 [2024-11-06 14:03:49.591043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207c050 (9): Bad file descriptor 00:20:10.570 [2024-11-06 14:03:49.723351] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:20:10.570 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.570 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:10.570 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.570 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.570 [ 00:20:10.570 { 00:20:10.570 "name": "nvme0n1", 00:20:10.570 "aliases": [ 00:20:10.570 "3dffe9d1-9728-4d00-8b74-35d123b29f6c" 00:20:10.570 ], 00:20:10.570 "product_name": "NVMe disk", 00:20:10.570 "block_size": 512, 00:20:10.570 "num_blocks": 2097152, 00:20:10.570 "uuid": "3dffe9d1-9728-4d00-8b74-35d123b29f6c", 00:20:10.570 "numa_id": 0, 00:20:10.570 "assigned_rate_limits": { 00:20:10.570 "rw_ios_per_sec": 0, 00:20:10.570 "rw_mbytes_per_sec": 0, 00:20:10.570 "r_mbytes_per_sec": 0, 00:20:10.570 "w_mbytes_per_sec": 0 00:20:10.570 }, 00:20:10.570 "claimed": false, 00:20:10.570 "zoned": false, 00:20:10.570 "supported_io_types": { 00:20:10.570 "read": true, 00:20:10.570 "write": true, 00:20:10.570 "unmap": false, 00:20:10.570 "flush": true, 00:20:10.570 "reset": true, 00:20:10.570 "nvme_admin": true, 00:20:10.570 "nvme_io": true, 00:20:10.570 "nvme_io_md": false, 00:20:10.570 "write_zeroes": true, 00:20:10.570 "zcopy": false, 00:20:10.570 "get_zone_info": false, 00:20:10.570 "zone_management": false, 00:20:10.570 "zone_append": false, 00:20:10.570 "compare": true, 00:20:10.571 "compare_and_write": true, 00:20:10.571 "abort": true, 00:20:10.571 "seek_hole": false, 00:20:10.571 "seek_data": false, 00:20:10.571 "copy": true, 00:20:10.571 "nvme_iov_md": false 00:20:10.571 }, 00:20:10.571 "memory_domains": [ 00:20:10.571 { 00:20:10.571 "dma_device_id": "system", 00:20:10.571 "dma_device_type": 1 00:20:10.571 } 00:20:10.571 ], 00:20:10.571 "driver_specific": { 00:20:10.571 "nvme": [ 00:20:10.571 { 00:20:10.571 "trid": { 00:20:10.571 "trtype": "TCP", 00:20:10.571 "adrfam": "IPv4", 00:20:10.571 "traddr": "10.0.0.2", 00:20:10.571 "trsvcid": "4420", 00:20:10.571 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:10.571 }, 00:20:10.571 "ctrlr_data": { 00:20:10.571 "cntlid": 2, 00:20:10.571 "vendor_id": "0x8086", 00:20:10.571 "model_number": "SPDK bdev Controller", 00:20:10.571 "serial_number": "00000000000000000000", 00:20:10.571 "firmware_revision": "25.01", 00:20:10.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:10.571 "oacs": { 00:20:10.571 "security": 0, 00:20:10.571 "format": 0, 00:20:10.571 "firmware": 0, 00:20:10.571 "ns_manage": 0 00:20:10.571 }, 00:20:10.571 "multi_ctrlr": true, 00:20:10.571 "ana_reporting": false 00:20:10.571 }, 00:20:10.571 "vs": { 00:20:10.571 "nvme_version": "1.3" 00:20:10.571 }, 00:20:10.571 "ns_data": { 00:20:10.571 "id": 1, 00:20:10.571 "can_share": true 00:20:10.571 } 00:20:10.571 } 00:20:10.571 ], 00:20:10.571 "mp_policy": "active_passive" 00:20:10.571 } 00:20:10.571 } 00:20:10.571 ] 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jDoffXvqoZ 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jDoffXvqoZ 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jDoffXvqoZ 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 [2024-11-06 14:03:49.783567] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.571 [2024-11-06 14:03:49.783679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.571 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 [2024-11-06 14:03:49.799627] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.830 nvme0n1 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.830 [ 00:20:10.830 { 00:20:10.830 "name": "nvme0n1", 00:20:10.830 "aliases": [ 00:20:10.830 "3dffe9d1-9728-4d00-8b74-35d123b29f6c" 00:20:10.830 ], 00:20:10.830 "product_name": "NVMe disk", 00:20:10.830 "block_size": 512, 00:20:10.830 "num_blocks": 2097152, 00:20:10.830 "uuid": "3dffe9d1-9728-4d00-8b74-35d123b29f6c", 00:20:10.830 "numa_id": 0, 00:20:10.830 "assigned_rate_limits": { 00:20:10.830 "rw_ios_per_sec": 0, 00:20:10.830 "rw_mbytes_per_sec": 0, 00:20:10.830 "r_mbytes_per_sec": 0, 00:20:10.830 "w_mbytes_per_sec": 0 00:20:10.830 }, 00:20:10.830 "claimed": false, 00:20:10.830 "zoned": false, 00:20:10.830 "supported_io_types": { 00:20:10.830 "read": true, 00:20:10.830 "write": true, 00:20:10.830 "unmap": false, 00:20:10.830 "flush": true, 00:20:10.830 "reset": true, 00:20:10.830 "nvme_admin": true, 00:20:10.830 "nvme_io": true, 00:20:10.830 "nvme_io_md": false, 00:20:10.830 "write_zeroes": true, 00:20:10.830 "zcopy": false, 00:20:10.830 "get_zone_info": false, 00:20:10.830 "zone_management": false, 00:20:10.830 "zone_append": false, 00:20:10.830 "compare": true, 00:20:10.830 "compare_and_write": true, 00:20:10.830 "abort": true, 00:20:10.830 "seek_hole": false, 00:20:10.830 "seek_data": false, 00:20:10.830 "copy": true, 00:20:10.830 "nvme_iov_md": false 00:20:10.830 }, 00:20:10.830 "memory_domains": [ 00:20:10.830 { 00:20:10.830 "dma_device_id": "system", 00:20:10.830 "dma_device_type": 1 00:20:10.830 } 00:20:10.830 ], 00:20:10.830 "driver_specific": { 00:20:10.830 "nvme": [ 00:20:10.830 { 00:20:10.830 "trid": { 00:20:10.830 "trtype": "TCP", 00:20:10.830 "adrfam": "IPv4", 00:20:10.830 "traddr": "10.0.0.2", 00:20:10.830 "trsvcid": "4421", 00:20:10.830 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:10.830 }, 00:20:10.830 "ctrlr_data": { 00:20:10.830 "cntlid": 3, 00:20:10.830 "vendor_id": "0x8086", 00:20:10.830 "model_number": "SPDK bdev Controller", 00:20:10.830 "serial_number": "00000000000000000000", 00:20:10.830 "firmware_revision": "25.01", 00:20:10.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:10.830 "oacs": { 00:20:10.830 "security": 0, 00:20:10.830 "format": 0, 00:20:10.830 "firmware": 0, 00:20:10.830 "ns_manage": 0 00:20:10.830 }, 00:20:10.830 "multi_ctrlr": true, 00:20:10.830 "ana_reporting": false 00:20:10.830 }, 00:20:10.830 "vs": { 00:20:10.830 "nvme_version": "1.3" 00:20:10.830 }, 00:20:10.830 "ns_data": { 00:20:10.830 "id": 1, 00:20:10.830 "can_share": true 00:20:10.830 } 00:20:10.830 } 00:20:10.830 ], 00:20:10.830 "mp_policy": "active_passive" 00:20:10.830 } 00:20:10.830 } 00:20:10.830 ] 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.830 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jDoffXvqoZ 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.831 rmmod nvme_tcp 00:20:10.831 rmmod nvme_fabrics 00:20:10.831 rmmod nvme_keyring 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 944032 ']' 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 944032 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 944032 ']' 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 944032 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.831 14:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 944032 00:20:10.831 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:10.831 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:10.831 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 944032' 00:20:10.831 killing process with pid 944032 00:20:10.831 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 944032 00:20:10.831 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 944032 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.090 
14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.090 14:03:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:12.995 00:20:12.995 real 0m9.278s 00:20:12.995 user 0m3.258s 00:20:12.995 sys 0m4.379s 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.995 ************************************ 00:20:12.995 END TEST nvmf_async_init 00:20:12.995 ************************************ 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.995 ************************************ 00:20:12.995 START TEST dma 00:20:12.995 ************************************ 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:12.995 * Looking for test storage... 00:20:12.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:20:12.995 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.256 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:13.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.256 --rc genhtml_branch_coverage=1 00:20:13.256 --rc genhtml_function_coverage=1 00:20:13.256 --rc genhtml_legend=1 00:20:13.256 --rc geninfo_all_blocks=1 00:20:13.256 --rc geninfo_unexecuted_blocks=1 00:20:13.256 00:20:13.256 ' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.257 --rc genhtml_branch_coverage=1 00:20:13.257 --rc genhtml_function_coverage=1 00:20:13.257 --rc genhtml_legend=1 00:20:13.257 --rc geninfo_all_blocks=1 00:20:13.257 --rc geninfo_unexecuted_blocks=1 00:20:13.257 00:20:13.257 ' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.257 --rc genhtml_branch_coverage=1 00:20:13.257 --rc genhtml_function_coverage=1 00:20:13.257 --rc genhtml_legend=1 00:20:13.257 --rc geninfo_all_blocks=1 00:20:13.257 --rc geninfo_unexecuted_blocks=1 00:20:13.257 00:20:13.257 ' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.257 --rc genhtml_branch_coverage=1 00:20:13.257 --rc genhtml_function_coverage=1 00:20:13.257 --rc genhtml_legend=1 00:20:13.257 --rc geninfo_all_blocks=1 00:20:13.257 --rc geninfo_unexecuted_blocks=1 00:20:13.257 00:20:13.257 ' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.257 
14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:13.257 00:20:13.257 real 0m0.144s 00:20:13.257 user 0m0.086s 00:20:13.257 sys 0m0.066s 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:13.257 ************************************ 00:20:13.257 END TEST dma 00:20:13.257 ************************************ 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.257 ************************************ 00:20:13.257 START TEST nvmf_identify 00:20:13.257 
************************************ 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:13.257 * Looking for test storage... 00:20:13.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:20:13.257 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.518 --rc genhtml_branch_coverage=1 00:20:13.518 --rc genhtml_function_coverage=1 00:20:13.518 --rc genhtml_legend=1 00:20:13.518 --rc geninfo_all_blocks=1 00:20:13.518 --rc geninfo_unexecuted_blocks=1 00:20:13.518 00:20:13.518 ' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.518 --rc genhtml_branch_coverage=1 00:20:13.518 --rc genhtml_function_coverage=1 00:20:13.518 --rc genhtml_legend=1 00:20:13.518 --rc geninfo_all_blocks=1 00:20:13.518 --rc geninfo_unexecuted_blocks=1 00:20:13.518 00:20:13.518 ' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.518 --rc genhtml_branch_coverage=1 00:20:13.518 --rc genhtml_function_coverage=1 00:20:13.518 --rc genhtml_legend=1 00:20:13.518 --rc geninfo_all_blocks=1 00:20:13.518 --rc geninfo_unexecuted_blocks=1 00:20:13.518 00:20:13.518 ' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:13.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.518 --rc genhtml_branch_coverage=1 00:20:13.518 --rc genhtml_function_coverage=1 00:20:13.518 --rc genhtml_legend=1 00:20:13.518 --rc geninfo_all_blocks=1 00:20:13.518 --rc geninfo_unexecuted_blocks=1 00:20:13.518 00:20:13.518 ' 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:13.518 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.519 14:03:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.796 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:18.797 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:18.797 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
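The device discovery above walks sysfs rather than parsing lspci output. A minimal sketch of the same lookup, assuming the sysfs layout this log relies on; the two BDFs are the E810 ports reported in this run, and everything else is plain bash:

    # For each NVMe-oF-capable NIC found on the PCI bus, list the kernel
    # net devices bound to it, mirroring what nvmf/common.sh does above.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
        done
    done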
00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:18.797 Found net devices under 0000:31:00.0: cvl_0_0 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:18.797 Found net devices under 0000:31:00.1: cvl_0_1 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:18.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:20:18.797 00:20:18.797 --- 10.0.0.2 ping statistics --- 00:20:18.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.797 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:20:18.797 00:20:18.797 --- 10.0.0.1 ping statistics --- 00:20:18.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.797 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:18.797 14:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=948775 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 948775 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 948775 ']' 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:18.797 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.797 [2024-11-06 14:03:58.046650] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
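The target is launched inside the cvl_0_0_ns_spdk namespace created above, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A hedged sketch of reproducing that launch by hand, using the command and paths visible in this log; the readiness poll via rpc.py is an assumption about how one would stand in for waitforlisten, not what the harness literally runs:

    # Start the target in the server-side namespace with the same shm id,
    # trace mask, and core mask used above, then poll the RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done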
00:20:18.797 [2024-11-06 14:03:58.046716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.058 [2024-11-06 14:03:58.138543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.058 [2024-11-06 14:03:58.193265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.058 [2024-11-06 14:03:58.193310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.058 [2024-11-06 14:03:58.193318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.058 [2024-11-06 14:03:58.193325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.058 [2024-11-06 14:03:58.193332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.058 [2024-11-06 14:03:58.195714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.058 [2024-11-06 14:03:58.195853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.058 [2024-11-06 14:03:58.195995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.058 [2024-11-06 14:03:58.195996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.627 [2024-11-06 14:03:58.853841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.627 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.889 Malloc0 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.889 [2024-11-06 14:03:58.935892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.889 [ 00:20:19.889 { 00:20:19.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.889 "subtype": "Discovery", 00:20:19.889 "listen_addresses": [ 00:20:19.889 { 00:20:19.889 "trtype": "TCP", 00:20:19.889 "adrfam": "IPv4", 00:20:19.889 "traddr": "10.0.0.2", 00:20:19.889 "trsvcid": "4420" 00:20:19.889 } 00:20:19.889 ], 00:20:19.889 "allow_any_host": true, 00:20:19.889 "hosts": [] 00:20:19.889 }, 00:20:19.889 { 00:20:19.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.889 "subtype": "NVMe", 00:20:19.889 "listen_addresses": [ 00:20:19.889 { 00:20:19.889 "trtype": "TCP", 00:20:19.889 "adrfam": "IPv4", 00:20:19.889 "traddr": "10.0.0.2", 00:20:19.889 "trsvcid": "4420" 00:20:19.889 } 00:20:19.889 ], 00:20:19.889 "allow_any_host": true, 00:20:19.889 "hosts": [], 00:20:19.889 "serial_number": "SPDK00000000000001", 00:20:19.889 "model_number": "SPDK bdev Controller", 00:20:19.889 "max_namespaces": 32, 00:20:19.889 "min_cntlid": 1, 00:20:19.889 "max_cntlid": 65519, 00:20:19.889 "namespaces": [ 00:20:19.889 { 00:20:19.889 "nsid": 1, 00:20:19.889 "bdev_name": "Malloc0", 00:20:19.889 "name": "Malloc0", 00:20:19.889 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:19.889 "eui64": "ABCDEF0123456789", 00:20:19.889 "uuid": "3ae78ebd-85a9-47ea-b1d4-4c1538fed642" 00:20:19.889 } 00:20:19.889 ] 00:20:19.889 } 00:20:19.889 ] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.889 14:03:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:19.889 [2024-11-06 14:03:58.972592] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:20:19.889 [2024-11-06 14:03:58.972623] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949126 ] 00:20:19.889 [2024-11-06 14:03:59.026459] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:19.890 [2024-11-06 14:03:59.026512] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:19.890 [2024-11-06 14:03:59.026518] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:19.890 [2024-11-06 14:03:59.026534] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:19.890 [2024-11-06 14:03:59.026545] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:19.890 [2024-11-06 14:03:59.027175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:19.890 [2024-11-06 14:03:59.027208] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f54550 0 00:20:19.890 [2024-11-06 14:03:59.033258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:19.890 [2024-11-06 14:03:59.033272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:19.890 [2024-11-06 14:03:59.033277] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:19.890 [2024-11-06 14:03:59.033281] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:19.890 [2024-11-06 14:03:59.033313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.033318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.033323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.033336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:19.890 [2024-11-06 14:03:59.033354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.041256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.041266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.041269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.041283] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:19.890 [2024-11-06 14:03:59.041290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:19.890 [2024-11-06 14:03:59.041295] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:19.890 [2024-11-06 14:03:59.041309] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.041325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.041339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.041545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.041552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.041555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.041564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:19.890 [2024-11-06 14:03:59.041572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:19.890 [2024-11-06 14:03:59.041579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.041593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.041607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.041799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.041805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.041809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.041818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:19.890 [2024-11-06 14:03:59.041826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:19.890 [2024-11-06 14:03:59.041832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.041840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.041847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.041857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 
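The FABRIC CONNECT and PROPERTY GET exchange traced here is the same admin-queue handshake any TCP initiator performs against this target. A hedged sketch of driving it by hand with nvme-cli from the initiator namespace configured earlier; the address, port, and subsystem NQN are the ones in this log, while the flags are standard nvme-cli usage rather than anything taken from the test scripts:

    # Query the discovery service, then connect to the data subsystem it
    # advertises; both listeners were created at 10.0.0.2:4420 above.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1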
00:20:19.890 [2024-11-06 14:03:59.042046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.042053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.042056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.042065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:19.890 [2024-11-06 14:03:59.042075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.042089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.042099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.042279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.042286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.042289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.042298] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:19.890 [2024-11-06 14:03:59.042303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:19.890 [2024-11-06 14:03:59.042310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:19.890 [2024-11-06 14:03:59.042418] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:19.890 [2024-11-06 14:03:59.042423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:19.890 [2024-11-06 14:03:59.042432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.042450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.042461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.042645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.042652] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.042655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.042664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:19.890 [2024-11-06 14:03:59.042673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.042687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.042698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.042891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.890 [2024-11-06 14:03:59.042897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.890 [2024-11-06 14:03:59.042901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.890 [2024-11-06 14:03:59.042909] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:19.890 [2024-11-06 14:03:59.042914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:19.890 [2024-11-06 14:03:59.042921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:19.890 [2024-11-06 14:03:59.042929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:19.890 [2024-11-06 14:03:59.042938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.042941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.890 [2024-11-06 14:03:59.042948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.890 [2024-11-06 14:03:59.042959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.890 [2024-11-06 14:03:59.043185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.890 [2024-11-06 14:03:59.043192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.890 [2024-11-06 14:03:59.043195] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.890 [2024-11-06 14:03:59.043199] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f54550): datao=0, datal=4096, cccid=0 00:20:19.891 [2024-11-06 14:03:59.043204] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1fb6100) on tqpair(0x1f54550): expected_datao=0, payload_size=4096 00:20:19.891 [2024-11-06 14:03:59.043209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.043223] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.043227] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.891 [2024-11-06 14:03:59.084270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.891 [2024-11-06 14:03:59.084274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.891 [2024-11-06 14:03:59.084288] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:19.891 [2024-11-06 14:03:59.084293] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:19.891 [2024-11-06 14:03:59.084297] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:19.891 [2024-11-06 14:03:59.084306] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:19.891 [2024-11-06 14:03:59.084311] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:19.891 [2024-11-06 14:03:59.084316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:19.891 [2024-11-06 14:03:59.084327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:19.891 [2024-11-06 14:03:59.084334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.084351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.891 [2024-11-06 14:03:59.084365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.891 [2024-11-06 14:03:59.084560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.891 [2024-11-06 14:03:59.084567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.891 [2024-11-06 14:03:59.084570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.891 [2024-11-06 14:03:59.084582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f54550) 00:20:19.891 
[2024-11-06 14:03:59.084595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.891 [2024-11-06 14:03:59.084602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.084615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.891 [2024-11-06 14:03:59.084621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.084635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.891 [2024-11-06 14:03:59.084641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.084657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.891 [2024-11-06 14:03:59.084661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:19.891 [2024-11-06 14:03:59.084669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:19.891 [2024-11-06 14:03:59.084676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.084686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.891 [2024-11-06 14:03:59.084698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6100, cid 0, qid 0 00:20:19.891 [2024-11-06 14:03:59.084704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6280, cid 1, qid 0 00:20:19.891 [2024-11-06 14:03:59.084709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6400, cid 2, qid 0 00:20:19.891 [2024-11-06 14:03:59.084713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:19.891 [2024-11-06 14:03:59.084718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6700, cid 4, qid 0 00:20:19.891 [2024-11-06 14:03:59.084955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.891 [2024-11-06 14:03:59.084961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.891 [2024-11-06 14:03:59.084965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:19.891 [2024-11-06 14:03:59.084969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6700) on tqpair=0x1f54550 00:20:19.891 [2024-11-06 14:03:59.084976] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:19.891 [2024-11-06 14:03:59.084981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:19.891 [2024-11-06 14:03:59.084992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.084996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.085003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.891 [2024-11-06 14:03:59.085013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6700, cid 4, qid 0 00:20:19.891 [2024-11-06 14:03:59.085188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.891 [2024-11-06 14:03:59.085195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.891 [2024-11-06 14:03:59.085198] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085202] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f54550): datao=0, datal=4096, cccid=4 00:20:19.891 [2024-11-06 14:03:59.085207] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb6700) on tqpair(0x1f54550): expected_datao=0, payload_size=4096 00:20:19.891 [2024-11-06 14:03:59.085211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085218] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085222] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.891 [2024-11-06 14:03:59.085395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.891 [2024-11-06 14:03:59.085402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6700) on tqpair=0x1f54550 00:20:19.891 [2024-11-06 14:03:59.085419] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:19.891 [2024-11-06 14:03:59.085441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.085453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.891 [2024-11-06 14:03:59.085460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.085473] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.891 [2024-11-06 14:03:59.085487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6700, cid 4, qid 0 00:20:19.891 [2024-11-06 14:03:59.085493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6880, cid 5, qid 0 00:20:19.891 [2024-11-06 14:03:59.085712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.891 [2024-11-06 14:03:59.085719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.891 [2024-11-06 14:03:59.085722] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085726] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f54550): datao=0, datal=1024, cccid=4 00:20:19.891 [2024-11-06 14:03:59.085731] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb6700) on tqpair(0x1f54550): expected_datao=0, payload_size=1024 00:20:19.891 [2024-11-06 14:03:59.085735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085742] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085745] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.891 [2024-11-06 14:03:59.085757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.891 [2024-11-06 14:03:59.085760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.085764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6880) on tqpair=0x1f54550 00:20:19.891 [2024-11-06 14:03:59.127252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.891 [2024-11-06 14:03:59.127261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.891 [2024-11-06 14:03:59.127265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.127269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6700) on tqpair=0x1f54550 00:20:19.891 [2024-11-06 14:03:59.127280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.891 [2024-11-06 14:03:59.127284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f54550) 00:20:19.891 [2024-11-06 14:03:59.127291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.891 [2024-11-06 14:03:59.127306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6700, cid 4, qid 0 00:20:19.892 [2024-11-06 14:03:59.127494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.892 [2024-11-06 14:03:59.127501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.892 [2024-11-06 14:03:59.127504] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127508] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f54550): datao=0, datal=3072, cccid=4 00:20:19.892 [2024-11-06 14:03:59.127515] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb6700) on tqpair(0x1f54550): expected_datao=0, payload_size=3072 00:20:19.892 [2024-11-06 14:03:59.127520] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127527] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127530] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.892 [2024-11-06 14:03:59.127691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.892 [2024-11-06 14:03:59.127695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6700) on tqpair=0x1f54550 00:20:19.892 [2024-11-06 14:03:59.127707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f54550) 00:20:19.892 [2024-11-06 14:03:59.127717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.892 [2024-11-06 14:03:59.127731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6700, cid 4, qid 0 00:20:19.892 [2024-11-06 14:03:59.127973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.892 [2024-11-06 14:03:59.127980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.892 [2024-11-06 14:03:59.127983] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.127987] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f54550): datao=0, datal=8, cccid=4 00:20:19.892 [2024-11-06 14:03:59.127991] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fb6700) on tqpair(0x1f54550): expected_datao=0, payload_size=8 00:20:19.892 [2024-11-06 14:03:59.127996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.128002] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.128006] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.170255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.892 [2024-11-06 14:03:59.170264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.892 [2024-11-06 14:03:59.170268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.892 [2024-11-06 14:03:59.170272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6700) on tqpair=0x1f54550 00:20:19.892 ===================================================== 00:20:19.892 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:19.892 ===================================================== 00:20:19.892 Controller Capabilities/Features 00:20:19.892 ================================ 00:20:19.892 Vendor ID: 0000 00:20:19.892 Subsystem Vendor ID: 0000 00:20:19.892 Serial Number: .................... 00:20:19.892 Model Number: ........................................ 
00:20:19.892 Firmware Version: 25.01 00:20:19.892 Recommended Arb Burst: 0 00:20:19.892 IEEE OUI Identifier: 00 00 00 00:20:19.892 Multi-path I/O 00:20:19.892 May have multiple subsystem ports: No 00:20:19.892 May have multiple controllers: No 00:20:19.892 Associated with SR-IOV VF: No 00:20:19.892 Max Data Transfer Size: 131072 00:20:19.892 Max Number of Namespaces: 0 00:20:19.892 Max Number of I/O Queues: 1024 00:20:19.892 NVMe Specification Version (VS): 1.3 00:20:19.892 NVMe Specification Version (Identify): 1.3 00:20:19.892 Maximum Queue Entries: 128 00:20:19.892 Contiguous Queues Required: Yes 00:20:19.892 Arbitration Mechanisms Supported 00:20:19.892 Weighted Round Robin: Not Supported 00:20:19.892 Vendor Specific: Not Supported 00:20:19.892 Reset Timeout: 15000 ms 00:20:19.892 Doorbell Stride: 4 bytes 00:20:19.892 NVM Subsystem Reset: Not Supported 00:20:19.892 Command Sets Supported 00:20:19.892 NVM Command Set: Supported 00:20:19.892 Boot Partition: Not Supported 00:20:19.892 Memory Page Size Minimum: 4096 bytes 00:20:19.892 Memory Page Size Maximum: 4096 bytes 00:20:19.892 Persistent Memory Region: Not Supported 00:20:19.892 Optional Asynchronous Events Supported 00:20:19.892 Namespace Attribute Notices: Not Supported 00:20:19.892 Firmware Activation Notices: Not Supported 00:20:19.892 ANA Change Notices: Not Supported 00:20:19.892 PLE Aggregate Log Change Notices: Not Supported 00:20:19.892 LBA Status Info Alert Notices: Not Supported 00:20:19.892 EGE Aggregate Log Change Notices: Not Supported 00:20:19.892 Normal NVM Subsystem Shutdown event: Not Supported 00:20:19.892 Zone Descriptor Change Notices: Not Supported 00:20:19.892 Discovery Log Change Notices: Supported 00:20:19.892 Controller Attributes 00:20:19.892 128-bit Host Identifier: Not Supported 00:20:19.892 Non-Operational Permissive Mode: Not Supported 00:20:19.892 NVM Sets: Not Supported 00:20:19.892 Read Recovery Levels: Not Supported 00:20:19.892 Endurance Groups: Not Supported 00:20:19.892 Predictable Latency Mode: Not Supported 00:20:19.892 Traffic Based Keep ALive: Not Supported 00:20:19.892 Namespace Granularity: Not Supported 00:20:19.892 SQ Associations: Not Supported 00:20:19.892 UUID List: Not Supported 00:20:19.892 Multi-Domain Subsystem: Not Supported 00:20:19.892 Fixed Capacity Management: Not Supported 00:20:19.892 Variable Capacity Management: Not Supported 00:20:19.892 Delete Endurance Group: Not Supported 00:20:19.892 Delete NVM Set: Not Supported 00:20:19.892 Extended LBA Formats Supported: Not Supported 00:20:19.892 Flexible Data Placement Supported: Not Supported 00:20:19.892 00:20:19.892 Controller Memory Buffer Support 00:20:19.892 ================================ 00:20:19.892 Supported: No 00:20:19.892 00:20:19.892 Persistent Memory Region Support 00:20:19.892 ================================ 00:20:19.892 Supported: No 00:20:19.892 00:20:19.892 Admin Command Set Attributes 00:20:19.892 ============================ 00:20:19.892 Security Send/Receive: Not Supported 00:20:19.892 Format NVM: Not Supported 00:20:19.892 Firmware Activate/Download: Not Supported 00:20:19.892 Namespace Management: Not Supported 00:20:19.892 Device Self-Test: Not Supported 00:20:19.892 Directives: Not Supported 00:20:19.892 NVMe-MI: Not Supported 00:20:19.892 Virtualization Management: Not Supported 00:20:19.892 Doorbell Buffer Config: Not Supported 00:20:19.892 Get LBA Status Capability: Not Supported 00:20:19.892 Command & Feature Lockdown Capability: Not Supported 00:20:19.892 Abort Command Limit: 1 00:20:19.892 Async 
Event Request Limit: 4 00:20:19.892 Number of Firmware Slots: N/A 00:20:19.892 Firmware Slot 1 Read-Only: N/A 00:20:19.892 Firmware Activation Without Reset: N/A 00:20:19.892 Multiple Update Detection Support: N/A 00:20:19.892 Firmware Update Granularity: No Information Provided 00:20:19.892 Per-Namespace SMART Log: No 00:20:19.892 Asymmetric Namespace Access Log Page: Not Supported 00:20:19.892 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:19.892 Command Effects Log Page: Not Supported 00:20:19.892 Get Log Page Extended Data: Supported 00:20:19.892 Telemetry Log Pages: Not Supported 00:20:19.892 Persistent Event Log Pages: Not Supported 00:20:19.892 Supported Log Pages Log Page: May Support 00:20:19.892 Commands Supported & Effects Log Page: Not Supported 00:20:19.892 Feature Identifiers & Effects Log Page:May Support 00:20:19.892 NVMe-MI Commands & Effects Log Page: May Support 00:20:19.892 Data Area 4 for Telemetry Log: Not Supported 00:20:19.892 Error Log Page Entries Supported: 128 00:20:19.892 Keep Alive: Not Supported 00:20:19.892 00:20:19.892 NVM Command Set Attributes 00:20:19.892 ========================== 00:20:19.892 Submission Queue Entry Size 00:20:19.892 Max: 1 00:20:19.892 Min: 1 00:20:19.892 Completion Queue Entry Size 00:20:19.892 Max: 1 00:20:19.892 Min: 1 00:20:19.892 Number of Namespaces: 0 00:20:19.892 Compare Command: Not Supported 00:20:19.892 Write Uncorrectable Command: Not Supported 00:20:19.892 Dataset Management Command: Not Supported 00:20:19.892 Write Zeroes Command: Not Supported 00:20:19.892 Set Features Save Field: Not Supported 00:20:19.892 Reservations: Not Supported 00:20:19.892 Timestamp: Not Supported 00:20:19.892 Copy: Not Supported 00:20:19.892 Volatile Write Cache: Not Present 00:20:19.892 Atomic Write Unit (Normal): 1 00:20:19.892 Atomic Write Unit (PFail): 1 00:20:19.892 Atomic Compare & Write Unit: 1 00:20:19.892 Fused Compare & Write: Supported 00:20:19.892 Scatter-Gather List 00:20:19.892 SGL Command Set: Supported 00:20:19.892 SGL Keyed: Supported 00:20:19.892 SGL Bit Bucket Descriptor: Not Supported 00:20:19.892 SGL Metadata Pointer: Not Supported 00:20:19.892 Oversized SGL: Not Supported 00:20:19.892 SGL Metadata Address: Not Supported 00:20:19.892 SGL Offset: Supported 00:20:19.892 Transport SGL Data Block: Not Supported 00:20:19.892 Replay Protected Memory Block: Not Supported 00:20:19.892 00:20:19.892 Firmware Slot Information 00:20:19.892 ========================= 00:20:19.892 Active slot: 0 00:20:19.892 00:20:19.892 00:20:19.892 Error Log 00:20:19.892 ========= 00:20:19.892 00:20:19.892 Active Namespaces 00:20:19.892 ================= 00:20:19.892 Discovery Log Page 00:20:19.892 ================== 00:20:19.893 Generation Counter: 2 00:20:19.893 Number of Records: 2 00:20:19.893 Record Format: 0 00:20:19.893 00:20:19.893 Discovery Log Entry 0 00:20:19.893 ---------------------- 00:20:19.893 Transport Type: 3 (TCP) 00:20:19.893 Address Family: 1 (IPv4) 00:20:19.893 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:19.893 Entry Flags: 00:20:19.893 Duplicate Returned Information: 1 00:20:19.893 Explicit Persistent Connection Support for Discovery: 1 00:20:19.893 Transport Requirements: 00:20:19.893 Secure Channel: Not Required 00:20:19.893 Port ID: 0 (0x0000) 00:20:19.893 Controller ID: 65535 (0xffff) 00:20:19.893 Admin Max SQ Size: 128 00:20:19.893 Transport Service Identifier: 4420 00:20:19.893 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:19.893 Transport Address: 10.0.0.2 00:20:19.893 
Discovery Log Entry 1 00:20:19.893 ---------------------- 00:20:19.893 Transport Type: 3 (TCP) 00:20:19.893 Address Family: 1 (IPv4) 00:20:19.893 Subsystem Type: 2 (NVM Subsystem) 00:20:19.893 Entry Flags: 00:20:19.893 Duplicate Returned Information: 0 00:20:19.893 Explicit Persistent Connection Support for Discovery: 0 00:20:19.893 Transport Requirements: 00:20:19.893 Secure Channel: Not Required 00:20:19.893 Port ID: 0 (0x0000) 00:20:19.893 Controller ID: 65535 (0xffff) 00:20:19.893 Admin Max SQ Size: 128 00:20:19.893 Transport Service Identifier: 4420 00:20:19.893 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:19.893 Transport Address: 10.0.0.2 [2024-11-06 14:03:59.170362] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:19.893 [2024-11-06 14:03:59.170372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6100) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.170379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.893 [2024-11-06 14:03:59.170384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6280) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.170389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.893 [2024-11-06 14:03:59.170394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6400) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.170399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.893 [2024-11-06 14:03:59.170404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.170408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.893 [2024-11-06 14:03:59.170419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:19.893 [2024-11-06 14:03:59.170435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.893 [2024-11-06 14:03:59.170448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:19.893 [2024-11-06 14:03:59.170630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.893 [2024-11-06 14:03:59.170636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.893 [2024-11-06 14:03:59.170640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.170651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:19.893 [2024-11-06 
14:03:59.170665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.893 [2024-11-06 14:03:59.170678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:19.893 [2024-11-06 14:03:59.170890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.893 [2024-11-06 14:03:59.170896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.893 [2024-11-06 14:03:59.170899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.170908] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:19.893 [2024-11-06 14:03:59.170913] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:19.893 [2024-11-06 14:03:59.170922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.170930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:19.893 [2024-11-06 14:03:59.170937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.893 [2024-11-06 14:03:59.170947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:19.893 [2024-11-06 14:03:59.171103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.893 [2024-11-06 14:03:59.171110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.893 [2024-11-06 14:03:59.171113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.171117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:19.893 [2024-11-06 14:03:59.171127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.171131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.893 [2024-11-06 14:03:59.171135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:19.893 [2024-11-06 14:03:59.171142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.893 [2024-11-06 14:03:59.171152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.170 [2024-11-06 14:03:59.171340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.170 [2024-11-06 14:03:59.171348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.170 [2024-11-06 14:03:59.171352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.170 [2024-11-06 14:03:59.171356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.170 [2024-11-06 14:03:59.171368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.170 [2024-11-06 14:03:59.171372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.170 [2024-11-06 14:03:59.171375] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.170 [2024-11-06 14:03:59.171382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.170 [2024-11-06 14:03:59.171393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.170 [2024-11-06 14:03:59.171599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.170 [2024-11-06 14:03:59.171606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.170 [2024-11-06 14:03:59.171609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.170 [2024-11-06 14:03:59.171613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.170 [2024-11-06 14:03:59.171623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.170 [2024-11-06 14:03:59.171627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.170 [2024-11-06 14:03:59.171630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.171637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.171647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.171822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.171828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.171832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.171836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.171846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.171849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.171853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.171860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.171870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.172048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.172054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.172058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.172071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.172085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.172095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.172310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.172316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.172320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.172333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.172350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.172360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.172570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.172576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.172579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.172593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.172607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.172617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.172783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.172789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.172792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.172806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.172813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.172820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.172830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 
[2024-11-06 14:03:59.173035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.173042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.173045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.173059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.173073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.173083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.173265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.173271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.173275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.173289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.173305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.173316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.173512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.173519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.173522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.173536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.173550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.173560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.173732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.173738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
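
The decoded report above is the discovery log page the host just fetched: the GET LOG PAGE admin commands in the surrounding trace carry cdw10 values whose low byte is 0x70, the discovery log identifier, and the repeated FABRIC PROPERTY GET entries around this point are the host polling CSTS while it shuts the discovery controller down. A minimal sketch, under the assumption of a discovery controller `ctrlr` already obtained from spdk_nvme_connect(), of issuing that same admin command through SPDK's public API (single fixed buffer; a real reader re-issues the command once numrec is known):

/* Hypothetical helper, not part of this test run. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
discovery_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE (discovery) failed\n");
	}
	g_log_done = true;
}

static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Header portion only (genctr/numrec); entries would need a larger buffer. */
	static struct spdk_nvmf_discovery_log_page page;

	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     0 /* nsid, as in the trace */,
					     &page, sizeof(page), 0 /* offset */,
					     discovery_log_done, NULL) != 0) {
		return;
	}
	/* Admin commands complete asynchronously; poll until the callback fires. */
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("Generation Counter: %" PRIu64 "  Number of Records: %" PRIu64 "\n",
	       page.genctr, page.numrec);
}
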
00:20:20.171 [2024-11-06 14:03:59.173741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.173755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.173769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.173779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.173975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.173982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.173985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.173989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.173999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.174003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.174006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.174013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.174023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.174194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.174200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.174204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.174208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.174217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.174221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.174225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f54550) 00:20:20.171 [2024-11-06 14:03:59.174233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.171 [2024-11-06 14:03:59.178249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fb6580, cid 3, qid 0 00:20:20.171 [2024-11-06 14:03:59.178259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.178266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.178269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.178273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1fb6580) on tqpair=0x1f54550 00:20:20.171 [2024-11-06 14:03:59.178281] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:20:20.171 00:20:20.171 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:20.171 [2024-11-06 14:03:59.204421] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:20:20.171 [2024-11-06 14:03:59.204452] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949131 ] 00:20:20.171 [2024-11-06 14:03:59.254686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:20.171 [2024-11-06 14:03:59.254732] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:20.171 [2024-11-06 14:03:59.254737] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:20.171 [2024-11-06 14:03:59.254750] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:20.171 [2024-11-06 14:03:59.254760] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:20.171 [2024-11-06 14:03:59.258447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:20.171 [2024-11-06 14:03:59.258475] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c14550 0 00:20:20.171 [2024-11-06 14:03:59.266254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:20.171 [2024-11-06 14:03:59.266265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:20.171 [2024-11-06 14:03:59.266269] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:20.171 [2024-11-06 14:03:59.266273] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:20.171 [2024-11-06 14:03:59.266300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.266306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.171 [2024-11-06 14:03:59.266310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.171 [2024-11-06 14:03:59.266321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:20.171 [2024-11-06 14:03:59.266338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.171 [2024-11-06 14:03:59.274254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.171 [2024-11-06 14:03:59.274264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.171 [2024-11-06 14:03:59.274268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.274281] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:20:20.172 [2024-11-06 14:03:59.274291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:20.172 [2024-11-06 14:03:59.274296] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:20.172 [2024-11-06 14:03:59.274308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.274324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.274337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.274563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.274569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.274573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.274582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:20.172 [2024-11-06 14:03:59.274589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:20.172 [2024-11-06 14:03:59.274596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.274610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.274621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.274836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.274843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.274846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.274855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:20.172 [2024-11-06 14:03:59.274863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:20.172 [2024-11-06 14:03:59.274870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.274877] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.274884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.274894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.275050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.275057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.275060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.275069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:20.172 [2024-11-06 14:03:59.275081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.275095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.275106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.275317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.275323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.275327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.275335] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:20.172 [2024-11-06 14:03:59.275340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:20.172 [2024-11-06 14:03:59.275348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:20.172 [2024-11-06 14:03:59.275456] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:20.172 [2024-11-06 14:03:59.275461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:20.172 [2024-11-06 14:03:59.275468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.275483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.275494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.275653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.275659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.275663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.275671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:20.172 [2024-11-06 14:03:59.275680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.275695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.275705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.275866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.275872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.275876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.275886] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:20.172 [2024-11-06 14:03:59.275891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.275899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:20.172 [2024-11-06 14:03:59.275906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.275915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.275918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.275925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.275936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.276200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.172 [2024-11-06 14:03:59.276207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.172 [2024-11-06 14:03:59.276210] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
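
The trace above is the fabrics bring-up that the spdk_nvme_identify invocation (with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') drives: FABRIC CONNECT, read VS and CAP, write CC.EN = 1, poll until CSTS.RDY = 1, then IDENTIFY CONTROLLER. A minimal standalone sketch of the same flow through SPDK's public connect API, error handling trimmed and the transport string taken from the tool's -r argument:

/* Hypothetical sketch of the connect-and-identify flow exercised above. */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	env_opts.opts_size = sizeof(env_opts);  /* recent-SPDK ABI convention */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";      /* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the fabrics CONNECT plus the CC.EN / CSTS.RDY sequence logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x  subnqn %s\n", cdata->cntlid, cdata->subnqn);

	spdk_nvme_detach(ctrlr);  /* triggers the shutdown sequence seen earlier */
	return 0;
}
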
00:20:20.172 [2024-11-06 14:03:59.276215] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=4096, cccid=0 00:20:20.172 [2024-11-06 14:03:59.276219] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76100) on tqpair(0x1c14550): expected_datao=0, payload_size=4096 00:20:20.172 [2024-11-06 14:03:59.276224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276231] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276235] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.276414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.276417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.276428] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:20.172 [2024-11-06 14:03:59.276433] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:20.172 [2024-11-06 14:03:59.276437] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:20.172 [2024-11-06 14:03:59.276446] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:20.172 [2024-11-06 14:03:59.276451] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:20.172 [2024-11-06 14:03:59.276456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.276466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.276472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.276487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.172 [2024-11-06 14:03:59.276498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.276679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.276686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.276689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.276700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276704] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.276714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.172 [2024-11-06 14:03:59.276720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.276733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.172 [2024-11-06 14:03:59.276739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.276752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.172 [2024-11-06 14:03:59.276758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.276771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.172 [2024-11-06 14:03:59.276776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.276784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.276790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.276794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.276801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.172 [2024-11-06 14:03:59.276812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76100, cid 0, qid 0 00:20:20.172 [2024-11-06 14:03:59.276818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76280, cid 1, qid 0 00:20:20.172 [2024-11-06 14:03:59.276822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76400, cid 2, qid 0 00:20:20.172 [2024-11-06 14:03:59.276827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.172 [2024-11-06 14:03:59.276832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.172 [2024-11-06 14:03:59.277073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 
14:03:59.277079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.277083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.277087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.172 [2024-11-06 14:03:59.277095] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:20.172 [2024-11-06 14:03:59.277101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.277109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.277115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:20.172 [2024-11-06 14:03:59.277122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.277126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.172 [2024-11-06 14:03:59.277129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.172 [2024-11-06 14:03:59.277136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.172 [2024-11-06 14:03:59.277146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.172 [2024-11-06 14:03:59.277357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.172 [2024-11-06 14:03:59.277364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.172 [2024-11-06 14:03:59.277367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.277371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.277435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.277445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.277452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.277456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.277462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.277473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.173 [2024-11-06 14:03:59.277673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.173 [2024-11-06 14:03:59.277680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.173 [2024-11-06 14:03:59.277683] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.277687] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=4096, cccid=4 00:20:20.173 [2024-11-06 14:03:59.277692] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76700) on tqpair(0x1c14550): expected_datao=0, payload_size=4096 00:20:20.173 [2024-11-06 14:03:59.277696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.277707] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.277711] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.322256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.322268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.322272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.322276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.322292] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:20.173 [2024-11-06 14:03:59.322302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.322314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.322321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.322325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.322332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.322345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.173 [2024-11-06 14:03:59.322567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.173 [2024-11-06 14:03:59.322574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.173 [2024-11-06 14:03:59.322578] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.322582] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=4096, cccid=4 00:20:20.173 [2024-11-06 14:03:59.322586] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76700) on tqpair(0x1c14550): expected_datao=0, payload_size=4096 00:20:20.173 [2024-11-06 14:03:59.322591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.322604] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.322608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.363310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.363319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.363323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.363327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.363339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.363349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.363356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.363360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.363367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.363378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.173 [2024-11-06 14:03:59.363640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.173 [2024-11-06 14:03:59.363647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.173 [2024-11-06 14:03:59.363651] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.363654] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=4096, cccid=4 00:20:20.173 [2024-11-06 14:03:59.363659] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76700) on tqpair(0x1c14550): expected_datao=0, payload_size=4096 00:20:20.173 [2024-11-06 14:03:59.363663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.363676] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.363680] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.406266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.406270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.406284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.406293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.406302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.406308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.406314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.406319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
00:20:20.173 [2024-11-06 14:03:59.406324] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:20.173 [2024-11-06 14:03:59.406328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:20.173 [2024-11-06 14:03:59.406334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:20.173 [2024-11-06 14:03:59.406348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.406359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.406366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.406379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.173 [2024-11-06 14:03:59.406394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.173 [2024-11-06 14:03:59.406400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76880, cid 5, qid 0 00:20:20.173 [2024-11-06 14:03:59.406508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.406514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.406518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.406528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.406534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.406538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76880) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.406551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.406561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.406571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76880, cid 5, qid 0 00:20:20.173 [2024-11-06 14:03:59.406751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.406760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.406764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 
[2024-11-06 14:03:59.406768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76880) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.406777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.406780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.406787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.406797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76880, cid 5, qid 0 00:20:20.173 [2024-11-06 14:03:59.407005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.407012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.407015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.407019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76880) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.407028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.407032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.407038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.407048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76880, cid 5, qid 0 00:20:20.173 [2024-11-06 14:03:59.411252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.173 [2024-11-06 14:03:59.411269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.173 [2024-11-06 14:03:59.411273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76880) on tqpair=0x1c14550 00:20:20.173 [2024-11-06 14:03:59.411291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.411302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.411309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.411319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.411327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.411336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.411344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c14550) 00:20:20.173 [2024-11-06 14:03:59.411354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.173 [2024-11-06 14:03:59.411366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76880, cid 5, qid 0 00:20:20.173 [2024-11-06 14:03:59.411371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76700, cid 4, qid 0 00:20:20.173 [2024-11-06 14:03:59.411378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76a00, cid 6, qid 0 00:20:20.173 [2024-11-06 14:03:59.411383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76b80, cid 7, qid 0 00:20:20.173 [2024-11-06 14:03:59.411631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.173 [2024-11-06 14:03:59.411638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.173 [2024-11-06 14:03:59.411641] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411645] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=8192, cccid=5 00:20:20.173 [2024-11-06 14:03:59.411650] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76880) on tqpair(0x1c14550): expected_datao=0, payload_size=8192 00:20:20.173 [2024-11-06 14:03:59.411654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411754] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411759] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.173 [2024-11-06 14:03:59.411770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.173 [2024-11-06 14:03:59.411774] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411777] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=512, cccid=4 00:20:20.173 [2024-11-06 14:03:59.411782] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76700) on tqpair(0x1c14550): expected_datao=0, payload_size=512 00:20:20.173 [2024-11-06 14:03:59.411786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411792] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.173 [2024-11-06 14:03:59.411802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.173 [2024-11-06 14:03:59.411807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.174 [2024-11-06 14:03:59.411811] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411814] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=512, cccid=6 00:20:20.174 [2024-11-06 14:03:59.411818] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76a00) on 
tqpair(0x1c14550): expected_datao=0, payload_size=512 00:20:20.174 [2024-11-06 14:03:59.411823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411829] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411832] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:20.174 [2024-11-06 14:03:59.411844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:20.174 [2024-11-06 14:03:59.411847] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c14550): datao=0, datal=4096, cccid=7 00:20:20.174 [2024-11-06 14:03:59.411855] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c76b80) on tqpair(0x1c14550): expected_datao=0, payload_size=4096 00:20:20.174 [2024-11-06 14:03:59.411859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411866] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411869] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.174 [2024-11-06 14:03:59.411883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.174 [2024-11-06 14:03:59.411886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76880) on tqpair=0x1c14550 00:20:20.174 [2024-11-06 14:03:59.411905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.174 [2024-11-06 14:03:59.411912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.174 [2024-11-06 14:03:59.411915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76700) on tqpair=0x1c14550 00:20:20.174 [2024-11-06 14:03:59.411929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.174 [2024-11-06 14:03:59.411935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.174 [2024-11-06 14:03:59.411938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76a00) on tqpair=0x1c14550 00:20:20.174 [2024-11-06 14:03:59.411949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.174 [2024-11-06 14:03:59.411955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.174 [2024-11-06 14:03:59.411958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.174 [2024-11-06 14:03:59.411962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76b80) on tqpair=0x1c14550
=====================================================
00:20:20.174 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:20.174 =====================================================
00:20:20.174 Controller Capabilities/Features
00:20:20.174 ================================
00:20:20.174 Vendor ID: 8086
00:20:20.174 Subsystem Vendor ID: 8086
00:20:20.174 Serial Number: SPDK00000000000001
00:20:20.174 Model Number: SPDK bdev Controller
00:20:20.174 Firmware Version: 25.01
00:20:20.174 Recommended Arb Burst: 6
00:20:20.174 IEEE OUI Identifier: e4 d2 5c
00:20:20.174 Multi-path I/O
00:20:20.174 May have multiple subsystem ports: Yes
00:20:20.174 May have multiple controllers: Yes
00:20:20.174 Associated with SR-IOV VF: No
00:20:20.174 Max Data Transfer Size: 131072
00:20:20.174 Max Number of Namespaces: 32
00:20:20.174 Max Number of I/O Queues: 127
00:20:20.174 NVMe Specification Version (VS): 1.3
00:20:20.174 NVMe Specification Version (Identify): 1.3
00:20:20.174 Maximum Queue Entries: 128
00:20:20.174 Contiguous Queues Required: Yes
00:20:20.174 Arbitration Mechanisms Supported
00:20:20.174 Weighted Round Robin: Not Supported
00:20:20.174 Vendor Specific: Not Supported
00:20:20.174 Reset Timeout: 15000 ms
00:20:20.174 Doorbell Stride: 4 bytes
00:20:20.174 NVM Subsystem Reset: Not Supported
00:20:20.174 Command Sets Supported
00:20:20.174 NVM Command Set: Supported
00:20:20.174 Boot Partition: Not Supported
00:20:20.174 Memory Page Size Minimum: 4096 bytes
00:20:20.174 Memory Page Size Maximum: 4096 bytes
00:20:20.174 Persistent Memory Region: Not Supported
00:20:20.174 Optional Asynchronous Events Supported
00:20:20.174 Namespace Attribute Notices: Supported
00:20:20.174 Firmware Activation Notices: Not Supported
00:20:20.174 ANA Change Notices: Not Supported
00:20:20.174 PLE Aggregate Log Change Notices: Not Supported
00:20:20.174 LBA Status Info Alert Notices: Not Supported
00:20:20.174 EGE Aggregate Log Change Notices: Not Supported
00:20:20.174 Normal NVM Subsystem Shutdown event: Not Supported
00:20:20.174 Zone Descriptor Change Notices: Not Supported
00:20:20.174 Discovery Log Change Notices: Not Supported
00:20:20.174 Controller Attributes
00:20:20.174 128-bit Host Identifier: Supported
00:20:20.174 Non-Operational Permissive Mode: Not Supported
00:20:20.174 NVM Sets: Not Supported
00:20:20.174 Read Recovery Levels: Not Supported
00:20:20.174 Endurance Groups: Not Supported
00:20:20.174 Predictable Latency Mode: Not Supported
00:20:20.174 Traffic Based Keep ALive: Not Supported
00:20:20.174 Namespace Granularity: Not Supported
00:20:20.174 SQ Associations: Not Supported
00:20:20.174 UUID List: Not Supported
00:20:20.174 Multi-Domain Subsystem: Not Supported
00:20:20.174 Fixed Capacity Management: Not Supported
00:20:20.174 Variable Capacity Management: Not Supported
00:20:20.174 Delete Endurance Group: Not Supported
00:20:20.174 Delete NVM Set: Not Supported
00:20:20.174 Extended LBA Formats Supported: Not Supported
00:20:20.174 Flexible Data Placement Supported: Not Supported
00:20:20.174
00:20:20.174 Controller Memory Buffer Support
00:20:20.174 ================================
00:20:20.174 Supported: No
00:20:20.174
00:20:20.174 Persistent Memory Region Support
00:20:20.174 ================================
00:20:20.174 Supported: No
00:20:20.174
00:20:20.174 Admin Command Set Attributes
00:20:20.174 ============================
00:20:20.174 Security Send/Receive: Not Supported
00:20:20.174 Format NVM: Not Supported
00:20:20.174 Firmware Activate/Download: Not Supported
00:20:20.174 Namespace Management: Not Supported
00:20:20.174 Device Self-Test: Not Supported
00:20:20.174 Directives: Not Supported
00:20:20.174 NVMe-MI: Not Supported
00:20:20.174 Virtualization Management: Not Supported
00:20:20.174 Doorbell Buffer Config: Not Supported
00:20:20.174 Get LBA Status Capability: Not Supported
00:20:20.174 Command & Feature Lockdown Capability: Not Supported
00:20:20.174 Abort Command Limit: 4
00:20:20.174 Async Event Request Limit: 4
00:20:20.174 Number of Firmware Slots: N/A
00:20:20.174 Firmware Slot 1 Read-Only: N/A
00:20:20.174 Firmware Activation Without Reset: N/A
00:20:20.174 Multiple Update Detection Support: N/A
00:20:20.174 Firmware Update Granularity: No Information Provided
00:20:20.174 Per-Namespace SMART Log: No
00:20:20.174 Asymmetric Namespace Access Log Page: Not Supported
00:20:20.174 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:20.174 Command Effects Log Page: Supported
00:20:20.174 Get Log Page Extended Data: Supported
00:20:20.174 Telemetry Log Pages: Not Supported
00:20:20.174 Persistent Event Log Pages: Not Supported
00:20:20.174 Supported Log Pages Log Page: May Support
00:20:20.174 Commands Supported & Effects Log Page: Not Supported
00:20:20.174 Feature Identifiers & Effects Log Page:May Support
00:20:20.174 NVMe-MI Commands & Effects Log Page: May Support
00:20:20.174 Data Area 4 for Telemetry Log: Not Supported
00:20:20.174 Error Log Page Entries Supported: 128
00:20:20.174 Keep Alive: Supported
00:20:20.174 Keep Alive Granularity: 10000 ms
00:20:20.174
00:20:20.174 NVM Command Set Attributes
00:20:20.174 ==========================
00:20:20.174 Submission Queue Entry Size
00:20:20.174 Max: 64
00:20:20.174 Min: 64
00:20:20.174 Completion Queue Entry Size
00:20:20.175 Max: 16
00:20:20.175 Min: 16
00:20:20.175 Number of Namespaces: 32
00:20:20.175 Compare Command: Supported
00:20:20.175 Write Uncorrectable Command: Not Supported
00:20:20.175 Dataset Management Command: Supported
00:20:20.175 Write Zeroes Command: Supported
00:20:20.175 Set Features Save Field: Not Supported
00:20:20.175 Reservations: Supported
00:20:20.175 Timestamp: Not Supported
00:20:20.175 Copy: Supported
00:20:20.175 Volatile Write Cache: Present
00:20:20.175 Atomic Write Unit (Normal): 1
00:20:20.175 Atomic Write Unit (PFail): 1
00:20:20.175 Atomic Compare & Write Unit: 1
00:20:20.175 Fused Compare & Write: Supported
00:20:20.175 Scatter-Gather List
00:20:20.175 SGL Command Set: Supported
00:20:20.175 SGL Keyed: Supported
00:20:20.175 SGL Bit Bucket Descriptor: Not Supported
00:20:20.175 SGL Metadata Pointer: Not Supported
00:20:20.175 Oversized SGL: Not Supported
00:20:20.175 SGL Metadata Address: Not Supported
00:20:20.175 SGL Offset: Supported
00:20:20.175 Transport SGL Data Block: Not Supported
00:20:20.175 Replay Protected Memory Block: Not Supported
00:20:20.175
00:20:20.175 Firmware Slot Information
00:20:20.175 =========================
00:20:20.175 Active slot: 1
00:20:20.175 Slot 1 Firmware Revision: 25.01
00:20:20.175
00:20:20.175
00:20:20.175 Commands Supported and Effects
00:20:20.175 ==============================
00:20:20.175 Admin Commands
00:20:20.175 --------------
00:20:20.175 Get Log Page (02h): Supported
00:20:20.175 Identify (06h): Supported
00:20:20.175 Abort (08h): Supported
00:20:20.175 Set Features (09h): Supported
00:20:20.175 Get Features (0Ah): Supported
00:20:20.175 Asynchronous Event Request (0Ch): Supported
00:20:20.175 Keep Alive (18h): Supported
00:20:20.175 I/O Commands
00:20:20.175 ------------
00:20:20.175 Flush (00h): Supported LBA-Change
00:20:20.175 Write (01h): Supported LBA-Change
00:20:20.175 Read (02h): Supported
00:20:20.175 Compare (05h): Supported
00:20:20.175 Write Zeroes (08h): Supported LBA-Change
00:20:20.175 Dataset Management (09h): Supported LBA-Change
00:20:20.175 Copy (19h): Supported LBA-Change
00:20:20.175
00:20:20.175 Error Log
00:20:20.175 =========
00:20:20.175
00:20:20.175 Arbitration
00:20:20.175 ===========
00:20:20.175 Arbitration Burst: 1
00:20:20.175
00:20:20.175 Power Management
00:20:20.175 ================
00:20:20.175 Number of Power States: 1
00:20:20.175 Current Power State: Power State #0
00:20:20.175 Power State #0:
00:20:20.175 Max Power: 0.00 W
00:20:20.175 Non-Operational State: Operational
00:20:20.175 Entry Latency: Not Reported
00:20:20.175 Exit Latency: Not Reported
00:20:20.175 Relative Read Throughput: 0
00:20:20.175 Relative Read Latency: 0
00:20:20.175 Relative Write Throughput: 0
00:20:20.175 Relative Write Latency: 0
00:20:20.175 Idle Power: Not Reported
00:20:20.175 Active Power: Not Reported
00:20:20.175 Non-Operational Permissive Mode: Not Supported
00:20:20.175
00:20:20.175 Health Information
00:20:20.175 ==================
00:20:20.175 Critical Warnings:
00:20:20.175 Available Spare Space: OK
00:20:20.175 Temperature: OK
00:20:20.175 Device Reliability: OK
00:20:20.175 Read Only: No
00:20:20.175 Volatile Memory Backup: OK
00:20:20.175 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:20.175 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:20.175 Available Spare: 0%
00:20:20.175 Available Spare Threshold: 0%
00:20:20.175 Life Percentage Used:[2024-11-06 14:03:59.412059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.412072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.412083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76b80, cid 7, qid 0 00:20:20.175 [2024-11-06 14:03:59.412295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.412302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.412305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76b80) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412340] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:20.175 [2024-11-06 14:03:59.412349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76100) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.175 [2024-11-06 14:03:59.412361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76280) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.175 [2024-11-06 14:03:59.412371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76400) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.175 [2024-11-06 14:03:59.412380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.175 [2024-11-06 14:03:59.412393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.412407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.412419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.412648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.412656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.412660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.412685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.412698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.412879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.412886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.412889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.412898] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:20.175 [2024-11-06 14:03:59.412903] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:20.175 [2024-11-06 14:03:59.412912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.412919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.412926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.412936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.413145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.413151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 
14:03:59.413155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.413168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.413182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.413192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.413401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.413408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.413411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.413425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.413439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.413453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.413612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.413618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.413622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.413635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.413649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.413659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.413819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.413825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.413829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on 
tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.413842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.413849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.413856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.413866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.414074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.414080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.414084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.414097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.414111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.414121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.414331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.414338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.414341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.414355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.414369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.414379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.414590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.414596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.414599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.414613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414617] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.175 [2024-11-06 14:03:59.414627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.175 [2024-11-06 14:03:59.414637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.175 [2024-11-06 14:03:59.414790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.175 [2024-11-06 14:03:59.414796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.175 [2024-11-06 14:03:59.414800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.175 [2024-11-06 14:03:59.414813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.175 [2024-11-06 14:03:59.414820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.176 [2024-11-06 14:03:59.414827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.176 [2024-11-06 14:03:59.414837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.176 [2024-11-06 14:03:59.414996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.176 [2024-11-06 14:03:59.415002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.176 [2024-11-06 14:03:59.415005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.415009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.176 [2024-11-06 14:03:59.415019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.415023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.415026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.176 [2024-11-06 14:03:59.415033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.176 [2024-11-06 14:03:59.415043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.176 [2024-11-06 14:03:59.419252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.176 [2024-11-06 14:03:59.419260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.176 [2024-11-06 14:03:59.419263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.419267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.176 [2024-11-06 14:03:59.419277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.419281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.419285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c14550) 00:20:20.176 
[2024-11-06 14:03:59.419292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.176 [2024-11-06 14:03:59.419303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c76580, cid 3, qid 0 00:20:20.176 [2024-11-06 14:03:59.419513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:20.176 [2024-11-06 14:03:59.419522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:20.176 [2024-11-06 14:03:59.419525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:20.176 [2024-11-06 14:03:59.419529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c76580) on tqpair=0x1c14550 00:20:20.176 [2024-11-06 14:03:59.419537] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:20:20.176 0%
00:20:20.176 Data Units Read: 0
00:20:20.176 Data Units Written: 0
00:20:20.176 Host Read Commands: 0
00:20:20.176 Host Write Commands: 0
00:20:20.176 Controller Busy Time: 0 minutes
00:20:20.176 Power Cycles: 0
00:20:20.176 Power On Hours: 0 hours
00:20:20.176 Unsafe Shutdowns: 0
00:20:20.176 Unrecoverable Media Errors: 0
00:20:20.176 Lifetime Error Log Entries: 0
00:20:20.176 Warning Temperature Time: 0 minutes
00:20:20.176 Critical Temperature Time: 0 minutes
00:20:20.176
00:20:20.176 Number of Queues
00:20:20.176 ================
00:20:20.176 Number of I/O Submission Queues: 127
00:20:20.176 Number of I/O Completion Queues: 127
00:20:20.176
00:20:20.176 Active Namespaces
00:20:20.176 =================
00:20:20.176 Namespace ID:1
00:20:20.176 Error Recovery Timeout: Unlimited
00:20:20.176 Command Set Identifier: NVM (00h)
00:20:20.176 Deallocate: Supported
00:20:20.176 Deallocated/Unwritten Error: Not Supported
00:20:20.176 Deallocated Read Value: Unknown
00:20:20.176 Deallocate in Write Zeroes: Not Supported
00:20:20.176 Deallocated Guard Field: 0xFFFF
00:20:20.176 Flush: Supported
00:20:20.176 Reservation: Supported
00:20:20.176 Namespace Sharing Capabilities: Multiple Controllers
00:20:20.176 Size (in LBAs): 131072 (0GiB)
00:20:20.176 Capacity (in LBAs): 131072 (0GiB)
00:20:20.176 Utilization (in LBAs): 131072 (0GiB)
00:20:20.176 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:20.176 EUI64: ABCDEF0123456789
00:20:20.176 UUID: 3ae78ebd-85a9-47ea-b1d4-4c1538fed642
00:20:20.176 Thin Provisioning: Not Supported
00:20:20.176 Per-NS Atomic Units: Yes
00:20:20.176 Atomic Boundary Size (Normal): 0
00:20:20.176 Atomic Boundary Size (PFail): 0
00:20:20.176 Atomic Boundary Offset: 0
00:20:20.176 Maximum Single Source Range Length: 65535
00:20:20.176 Maximum Copy Length: 65535
00:20:20.176 Maximum Source Range Count: 1
00:20:20.176 NGUID/EUI64 Never Reused: No
00:20:20.176 Namespace Write Protected: No
00:20:20.176 Number of LBA Formats: 1
00:20:20.176 Current LBA Format: LBA Format #00
00:20:20.176 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:20.176
00:20:20.176 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:20:20.176 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:20.176 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:20.176 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- #
[[ 0 == 0 ]] 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.435 rmmod nvme_tcp 00:20:20.435 rmmod nvme_fabrics 00:20:20.435 rmmod nvme_keyring 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:20.435 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 948775 ']' 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 948775 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 948775 ']' 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 948775 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 948775 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 948775' 00:20:20.436 killing process with pid 948775 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 948775 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 948775 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- 
# xtrace_disable_per_cmd _remove_spdk_ns
00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:20.436 14:03:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:22.974
00:20:22.974 real 0m9.319s
00:20:22.974 user 0m7.433s
00:20:22.974 sys 0m4.525s
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:22.974 ************************************
00:20:22.974 END TEST nvmf_identify
00:20:22.974 ************************************
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:22.974 ************************************
00:20:22.974 START TEST nvmf_perf
00:20:22.974 ************************************
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:20:22.974 * Looking for test storage...
00:20:22.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:22.974 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- #
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.975 --rc genhtml_branch_coverage=1 00:20:22.975 --rc genhtml_function_coverage=1 00:20:22.975 --rc genhtml_legend=1 00:20:22.975 --rc geninfo_all_blocks=1 00:20:22.975 --rc geninfo_unexecuted_blocks=1 00:20:22.975 00:20:22.975 ' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.975 --rc genhtml_branch_coverage=1 00:20:22.975 --rc genhtml_function_coverage=1 00:20:22.975 --rc genhtml_legend=1 00:20:22.975 --rc geninfo_all_blocks=1 00:20:22.975 --rc geninfo_unexecuted_blocks=1 00:20:22.975 00:20:22.975 ' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.975 --rc genhtml_branch_coverage=1 00:20:22.975 --rc genhtml_function_coverage=1 00:20:22.975 --rc genhtml_legend=1 00:20:22.975 --rc geninfo_all_blocks=1 00:20:22.975 --rc geninfo_unexecuted_blocks=1 00:20:22.975 00:20:22.975 ' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.975 --rc genhtml_branch_coverage=1 00:20:22.975 --rc genhtml_function_coverage=1 00:20:22.975 --rc genhtml_legend=1 00:20:22.975 --rc geninfo_all_blocks=1 00:20:22.975 --rc geninfo_unexecuted_blocks=1 00:20:22.975 00:20:22.975 ' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.975 14:04:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.975 14:04:01 
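Editor's note: the "[: : integer expression expected" line above is bash complaining that common.sh line 33 ran '[' '' -eq 1 ']' with an empty operand; the test is harmless here (it simply evaluates false), but the noise is easy to reproduce and to silence. A minimal sketch of the failure mode and two common fixes, not a patch to the actual script:

x=''
[ "$x" -eq 1 ] && echo match       # reproduces "[: : integer expression expected"
[ "${x:-0}" -eq 1 ] && echo match  # fix 1: default the empty value to 0
(( x == 1 )) && echo match         # fix 2: arithmetic context already treats '' as 0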
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.975 14:04:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.251 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:28.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:28.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:28.252 Found net devices under 0000:31:00.0: cvl_0_0 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.252 14:04:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:28.252 Found net devices under 0000:31:00.1: cvl_0_1 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.252 14:04:07 
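Editor's note: the discovery loop above matched both Intel E810 functions (vendor 0x8086, device 0x159b, driver ice) and recorded their netdevs cvl_0_0 and cvl_0_1. A hedged sysfs equivalent of that walk, trimmed to the two E810 device IDs the script populates first; names and structure here are an illustration, not the script itself:

for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == 0x8086 ]] || continue
    did=$(<"$dev/device")
    [[ $did == 0x1592 || $did == 0x159b ]] || continue      # E810 variants
    drv=unbound
    [[ -e $dev/driver ]] && drv=$(basename "$(readlink "$dev/driver")")
    echo "${dev##*/}: $did driver=$drv net=$(ls "$dev/net" 2>/dev/null)"
done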
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:28.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:20:28.252 00:20:28.252 --- 10.0.0.2 ping statistics --- 00:20:28.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.252 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:20:28.252 00:20:28.252 --- 10.0.0.1 ping statistics --- 00:20:28.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.252 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=953469 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 953469 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 953469 ']' 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
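Editor's note: nvmf_tcp_init above splits the back-to-back port pair: the target port moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened in iptables, and both directions are ping-verified. Reduced to its commands (a sketch of the traced sequence, interface names as in this run):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                    # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target ns -> initiator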
00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:28.252 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:28.253 14:04:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.253 [2024-11-06 14:04:07.362503] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:20:28.253 [2024-11-06 14:04:07.362553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.253 [2024-11-06 14:04:07.450377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.253 [2024-11-06 14:04:07.486370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.253 [2024-11-06 14:04:07.486402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.253 [2024-11-06 14:04:07.486410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.253 [2024-11-06 14:04:07.486416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.253 [2024-11-06 14:04:07.486423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.253 [2024-11-06 14:04:07.487912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.253 [2024-11-06 14:04:07.488001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.253 [2024-11-06 14:04:07.488153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.253 [2024-11-06 14:04:07.488154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:29.191 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:29.450 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:29.450 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:29.709 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:20:29.709 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:29.709 14:04:08 
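Editor's note: with the target up and Malloc0 created, the chunk below exports both bdevs over TCP before the perf runs start. Condensed into a hedged sketch (rpc.py path shortened to a placeholder; the calls themselves are as traced):

rpc=scripts/rpc.py   # stands in for the full workspace path in the log
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420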
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:29.709 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:20:29.709 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:29.709 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:29.709 14:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.969 [2024-11-06 14:04:09.118657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.969 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.229 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:30.229 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:30.229 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:30.229 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:30.488 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.488 [2024-11-06 14:04:09.758450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.747 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:30.747 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:20:30.747 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:20:30.747 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:30.747 14:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:20:32.125 Initializing NVMe Controllers 00:20:32.125 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:20:32.125 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:20:32.125 Initialization complete. Launching workers. 
00:20:32.125 ======================================================== 00:20:32.125 Latency(us) 00:20:32.125 Device Information : IOPS MiB/s Average min max 00:20:32.125 PCIE (0000:65:00.0) NSID 1 from core 0: 95661.00 373.68 334.00 45.61 8153.72 00:20:32.125 ======================================================== 00:20:32.125 Total : 95661.00 373.68 334.00 45.61 8153.72 00:20:32.125 00:20:32.125 14:04:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:33.503 Initializing NVMe Controllers 00:20:33.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:33.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:33.503 Initialization complete. Launching workers. 00:20:33.503 ======================================================== 00:20:33.503 Latency(us) 00:20:33.503 Device Information : IOPS MiB/s Average min max 00:20:33.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.00 0.25 15578.47 167.25 46735.07 00:20:33.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 16743.61 6005.86 54867.62 00:20:33.503 ======================================================== 00:20:33.503 Total : 125.00 0.49 16137.74 167.25 54867.62 00:20:33.503 00:20:33.503 14:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.881 Initializing NVMe Controllers 00:20:34.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.881 Initialization complete. Launching workers. 00:20:34.881 ======================================================== 00:20:34.881 Latency(us) 00:20:34.881 Device Information : IOPS MiB/s Average min max 00:20:34.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12095.26 47.25 2645.72 401.55 8200.35 00:20:34.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3869.52 15.12 8271.16 3434.14 16028.24 00:20:34.881 ======================================================== 00:20:34.881 Total : 15964.78 62.36 4009.21 401.55 16028.24 00:20:34.881 00:20:34.881 14:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:34.881 14:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:34.881 14:04:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.569 Initializing NVMe Controllers 00:20:37.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.569 Controller IO queue size 128, less than required. 00:20:37.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:20:37.569 Controller IO queue size 128, less than required. 00:20:37.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.569 Initialization complete. Launching workers. 00:20:37.569 ======================================================== 00:20:37.569 Latency(us) 00:20:37.569 Device Information : IOPS MiB/s Average min max 00:20:37.569 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1850.25 462.56 69883.66 33672.00 119513.24 00:20:37.569 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 620.25 155.06 214605.65 57251.96 341016.75 00:20:37.569 ======================================================== 00:20:37.569 Total : 2470.50 617.63 106217.74 33672.00 341016.75 00:20:37.569 00:20:37.569 14:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:37.569 No valid NVMe controllers or AIO or URING devices found 00:20:37.569 Initializing NVMe Controllers 00:20:37.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.569 Controller IO queue size 128, less than required. 00:20:37.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:37.570 Controller IO queue size 128, less than required. 00:20:37.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.570 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:37.570 WARNING: Some requested NVMe devices were skipped 00:20:37.570 14:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:40.108 Initializing NVMe Controllers 00:20:40.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.108 Controller IO queue size 128, less than required. 00:20:40.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.108 Controller IO queue size 128, less than required. 00:20:40.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.108 Initialization complete. Launching workers. 
00:20:40.108 00:20:40.108 ==================== 00:20:40.108 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:40.108 TCP transport: 00:20:40.108 polls: 42770 00:20:40.108 idle_polls: 26967 00:20:40.108 sock_completions: 15803 00:20:40.108 nvme_completions: 7105 00:20:40.108 submitted_requests: 10616 00:20:40.108 queued_requests: 1 00:20:40.108 00:20:40.108 ==================== 00:20:40.108 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:40.108 TCP transport: 00:20:40.108 polls: 46901 00:20:40.108 idle_polls: 30568 00:20:40.108 sock_completions: 16333 00:20:40.108 nvme_completions: 7165 00:20:40.108 submitted_requests: 10826 00:20:40.108 queued_requests: 1 00:20:40.108 ======================================================== 00:20:40.108 Latency(us) 00:20:40.108 Device Information : IOPS MiB/s Average min max 00:20:40.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1773.02 443.25 73895.63 35713.11 118124.13 00:20:40.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1787.99 447.00 72304.36 34636.26 122873.32 00:20:40.108 ======================================================== 00:20:40.108 Total : 3561.01 890.25 73096.65 34636.26 122873.32 00:20:40.109 00:20:40.109 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:40.109 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.368 rmmod nvme_tcp 00:20:40.368 rmmod nvme_fabrics 00:20:40.368 rmmod nvme_keyring 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 953469 ']' 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 953469 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 953469 ']' 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 953469 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 953469 00:20:40.368 14:04:19 
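Editor's note: in the --transport-stat block above, polls minus idle_polls equals sock_completions for both namespaces (15803 and 16333), consistent with a poll counting as non-idle exactly when it reaped socket completions. A quick hedged check of the busy fraction; the formula is an assumption, the counters are copied from the log:

polls=42770 idle_polls=26967 sock_completions=15803            # NSID 1 counters above
echo "busy polls: $(( (polls - idle_polls) * 100 / polls ))%"  # -> 36%
(( polls - idle_polls == sock_completions )) && echo "non-idle polls == sock completions"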
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 953469' 00:20:40.368 killing process with pid 953469 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 953469 00:20:40.368 14:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 953469 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.273 14:04:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.815 00:20:44.815 real 0m21.715s 00:20:44.815 user 0m56.322s 00:20:44.815 sys 0m6.604s 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:44.815 ************************************ 00:20:44.815 END TEST nvmf_perf 00:20:44.815 ************************************ 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.815 ************************************ 00:20:44.815 START TEST nvmf_fio_host 00:20:44.815 ************************************ 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:44.815 * Looking for test storage... 
00:20:44.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:44.815 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:44.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.816 --rc genhtml_branch_coverage=1 00:20:44.816 --rc genhtml_function_coverage=1 00:20:44.816 --rc genhtml_legend=1 00:20:44.816 --rc geninfo_all_blocks=1 00:20:44.816 --rc geninfo_unexecuted_blocks=1 00:20:44.816 00:20:44.816 ' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:44.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.816 --rc genhtml_branch_coverage=1 00:20:44.816 --rc genhtml_function_coverage=1 00:20:44.816 --rc genhtml_legend=1 00:20:44.816 --rc geninfo_all_blocks=1 00:20:44.816 --rc geninfo_unexecuted_blocks=1 00:20:44.816 00:20:44.816 ' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:44.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.816 --rc genhtml_branch_coverage=1 00:20:44.816 --rc genhtml_function_coverage=1 00:20:44.816 --rc genhtml_legend=1 00:20:44.816 --rc geninfo_all_blocks=1 00:20:44.816 --rc geninfo_unexecuted_blocks=1 00:20:44.816 00:20:44.816 ' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:44.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.816 --rc genhtml_branch_coverage=1 00:20:44.816 --rc genhtml_function_coverage=1 00:20:44.816 --rc genhtml_legend=1 00:20:44.816 --rc geninfo_all_blocks=1 00:20:44.816 --rc geninfo_unexecuted_blocks=1 00:20:44.816 00:20:44.816 ' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.816 14:04:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.816 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.817 
14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.817 14:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:50.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:50.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:50.097 Found net devices under 0000:31:00.0: cvl_0_0 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:50.097 Found net devices under 0000:31:00.1: cvl_0_1 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.097 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.098 14:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:20:50.098 00:20:50.098 --- 10.0.0.2 ping statistics --- 00:20:50.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.098 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:20:50.098 00:20:50.098 --- 10.0.0.1 ping statistics --- 00:20:50.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.098 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=960877 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 960877 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 960877 ']' 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:50.098 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.098 [2024-11-06 14:04:29.149862] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:20:50.098 [2024-11-06 14:04:29.149926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.098 [2024-11-06 14:04:29.231543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.098 [2024-11-06 14:04:29.271843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.098 [2024-11-06 14:04:29.271879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.098 [2024-11-06 14:04:29.271885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.098 [2024-11-06 14:04:29.271890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.098 [2024-11-06 14:04:29.271895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.098 [2024-11-06 14:04:29.273651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.098 [2024-11-06 14:04:29.273783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.098 [2024-11-06 14:04:29.273949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.098 [2024-11-06 14:04:29.273951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.667 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.667 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:20:50.667 14:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:50.926 [2024-11-06 14:04:30.077508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.926 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:50.926 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.926 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.926 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:51.185 Malloc1 00:20:51.185 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.185 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.444 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.703 [2024-11-06 14:04:30.749855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:51.703 14:04:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.270 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:52.270 fio-3.35 00:20:52.270 Starting 1 thread 00:20:54.803 00:20:54.804 test: (groupid=0, jobs=1): 
err= 0: pid=961720: Wed Nov 6 14:04:33 2024 00:20:54.804 read: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(109MiB/2004msec) 00:20:54.804 slat (nsec): min=1397, max=100517, avg=1830.58, stdev=904.21 00:20:54.804 clat (usec): min=1650, max=8977, avg=5090.63, stdev=351.20 00:20:54.804 lat (usec): min=1664, max=8979, avg=5092.46, stdev=351.14 00:20:54.804 clat percentiles (usec): 00:20:54.804 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:20:54.804 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:20:54.804 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5604], 00:20:54.804 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 7635], 99.95th=[ 8094], 00:20:54.804 | 99.99th=[ 8717] 00:20:54.804 bw ( KiB/s): min=54842, max=55880, per=99.87%, avg=55560.50, stdev=490.92, samples=4 00:20:54.804 iops : min=13710, max=13970, avg=13890.00, stdev=122.97, samples=4 00:20:54.804 write: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(109MiB/2004msec); 0 zone resets 00:20:54.804 slat (nsec): min=1422, max=92907, avg=1889.19, stdev=682.21 00:20:54.804 clat (usec): min=992, max=7989, avg=4085.23, stdev=290.43 00:20:54.804 lat (usec): min=1003, max=7990, avg=4087.12, stdev=290.39 00:20:54.804 clat percentiles (usec): 00:20:54.804 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:20:54.804 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:20:54.804 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:20:54.804 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5407], 99.95th=[ 6652], 00:20:54.804 | 99.99th=[ 7701] 00:20:54.804 bw ( KiB/s): min=55265, max=55880, per=99.97%, avg=55624.25, stdev=262.69, samples=4 00:20:54.804 iops : min=13816, max=13970, avg=13906.00, stdev=65.79, samples=4 00:20:54.804 lat (usec) : 1000=0.01% 00:20:54.804 lat (msec) : 2=0.04%, 4=18.59%, 10=81.37% 00:20:54.804 cpu : usr=73.89%, sys=25.06%, ctx=35, majf=0, minf=17 00:20:54.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:54.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.804 issued rwts: total=27872,27875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.804 00:20:54.804 Run status group 0 (all jobs): 00:20:54.804 READ: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2004-2004msec 00:20:54.804 WRITE: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2004-2004msec 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # local sanitizers 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:54.804 14:04:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.804 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:54.804 fio-3.35 00:20:54.804 Starting 1 thread 00:20:57.339 [2024-11-06 14:04:36.357023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffd590 is same with the state(6) to be set 00:20:57.339 00:20:57.339 test: (groupid=0, jobs=1): err= 0: pid=962468: Wed Nov 6 14:04:36 2024 00:20:57.339 read: IOPS=11.6k, BW=182MiB/s (190MB/s)(364MiB/2006msec) 00:20:57.339 slat (usec): min=2, max=109, avg= 2.74, stdev= 1.25 00:20:57.339 clat (usec): min=1212, max=49611, avg=6692.33, stdev=3425.72 00:20:57.339 lat (usec): min=1214, max=49613, avg=6695.07, stdev=3425.83 00:20:57.339 clat percentiles (usec): 00:20:57.339 | 1.00th=[ 3326], 5.00th=[ 4015], 10.00th=[ 4424], 20.00th=[ 5014], 00:20:57.339 | 30.00th=[ 5473], 40.00th=[ 5932], 50.00th=[ 6325], 60.00th=[ 6652], 00:20:57.339 | 70.00th=[ 7046], 80.00th=[ 7701], 90.00th=[ 8979], 95.00th=[10159], 00:20:57.339 | 99.00th=[12387], 99.50th=[43779], 99.90th=[48497], 99.95th=[49021], 00:20:57.339 | 99.99th=[49546] 00:20:57.339 bw ( KiB/s): min=85152, max=104224, per=50.81%, avg=94504.00, 
stdev=8577.61, samples=4 00:20:57.339 iops : min= 5322, max= 6514, avg=5906.50, stdev=536.10, samples=4 00:20:57.339 write: IOPS=7006, BW=109MiB/s (115MB/s)(192MiB/1753msec); 0 zone resets 00:20:57.339 slat (usec): min=27, max=150, avg=30.82, stdev= 5.73 00:20:57.339 clat (usec): min=3140, max=13528, avg=7442.21, stdev=1313.57 00:20:57.339 lat (usec): min=3168, max=13568, avg=7473.02, stdev=1316.26 00:20:57.339 clat percentiles (usec): 00:20:57.339 | 1.00th=[ 5080], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6325], 00:20:57.339 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7570], 00:20:57.339 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[ 9896], 00:20:57.339 | 99.00th=[11076], 99.50th=[11469], 99.90th=[11994], 99.95th=[12518], 00:20:57.339 | 99.99th=[12649] 00:20:57.339 bw ( KiB/s): min=89632, max=107360, per=87.65%, avg=98264.00, stdev=8435.08, samples=4 00:20:57.339 iops : min= 5602, max= 6710, avg=6141.50, stdev=527.19, samples=4 00:20:57.339 lat (msec) : 2=0.04%, 4=3.32%, 10=91.87%, 20=4.42%, 50=0.36% 00:20:57.339 cpu : usr=85.84%, sys=12.77%, ctx=26, majf=0, minf=37 00:20:57.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:57.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.339 issued rwts: total=23319,12283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.339 00:20:57.339 Run status group 0 (all jobs): 00:20:57.339 READ: bw=182MiB/s (190MB/s), 182MiB/s-182MiB/s (190MB/s-190MB/s), io=364MiB (382MB), run=2006-2006msec 00:20:57.339 WRITE: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s), io=192MiB (201MB), run=1753-1753msec 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.339 rmmod nvme_tcp 00:20:57.339 rmmod nvme_fabrics 00:20:57.339 rmmod nvme_keyring 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 960877 ']' 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # 
killprocess 960877 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 960877 ']' 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 960877 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:57.339 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 960877 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 960877' 00:20:57.599 killing process with pid 960877 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 960877 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 960877 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.599 14:04:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.132 00:21:00.132 real 0m15.268s 00:21:00.132 user 1m0.704s 00:21:00.132 sys 0m5.874s 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.132 ************************************ 00:21:00.132 END TEST nvmf_fio_host 00:21:00.132 ************************************ 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.132 ************************************ 00:21:00.132 START TEST 
nvmf_failover 00:21:00.132 ************************************ 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:00.132 * Looking for test storage... 00:21:00.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:00.132 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:00.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.133 --rc genhtml_branch_coverage=1 00:21:00.133 --rc genhtml_function_coverage=1 00:21:00.133 --rc genhtml_legend=1 00:21:00.133 --rc geninfo_all_blocks=1 00:21:00.133 --rc geninfo_unexecuted_blocks=1 00:21:00.133 00:21:00.133 ' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:00.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.133 --rc genhtml_branch_coverage=1 00:21:00.133 --rc genhtml_function_coverage=1 00:21:00.133 --rc genhtml_legend=1 00:21:00.133 --rc geninfo_all_blocks=1 00:21:00.133 --rc geninfo_unexecuted_blocks=1 00:21:00.133 00:21:00.133 ' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:00.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.133 --rc genhtml_branch_coverage=1 00:21:00.133 --rc genhtml_function_coverage=1 00:21:00.133 --rc genhtml_legend=1 00:21:00.133 --rc geninfo_all_blocks=1 00:21:00.133 --rc geninfo_unexecuted_blocks=1 00:21:00.133 00:21:00.133 ' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:00.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.133 --rc genhtml_branch_coverage=1 00:21:00.133 --rc genhtml_function_coverage=1 00:21:00.133 --rc genhtml_legend=1 00:21:00.133 --rc geninfo_all_blocks=1 00:21:00.133 --rc geninfo_unexecuted_blocks=1 00:21:00.133 00:21:00.133 ' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
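With rpc_py pointing at the target's RPC script, the failover run later in this log builds its target with the same short RPC sequence the fio host test used above, ending with three listeners so the initiator has ports to fail over between. A condensed sketch of that sequence as it appears further down (the loop is an editorial condensation of the three separate add_listener calls; flags and values are reproduced verbatim from this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192         # TCP transport, options as used in this run
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
    for port in 4420 4421 4422; do                       # primary listener plus two failover ports
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done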
00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.133 14:04:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:05.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.405 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:05.406 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:05.406 Found net devices under 0000:31:00.0: cvl_0_0 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:05.406 Found net devices under 0000:31:00.1: cvl_0_1 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.406 14:04:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:05.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:21:05.406 00:21:05.406 --- 10.0.0.2 ping statistics --- 00:21:05.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.406 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms
00:21:05.406
00:21:05.406 --- 10.0.0.1 ping statistics ---
00:21:05.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:05.406 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
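The namespace setup just traced is the whole physical-NIC test topology in miniature: both E810 ports sit in the same host, so the target port is moved into its own network namespace so that NVMe/TCP traffic between the two ports crosses the physical link instead of the kernel loopback path. Reduced to its essentials (interface and namespace names exactly as in the trace; a sketch, not the script's own function):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2    # 10.0.0.2 is now reachable only across the wire, as the two pings above verify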
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=967224
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 967224
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 967224 ']'
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:05.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:05.406 [2024-11-06 14:04:44.262378] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:21:05.406 [2024-11-06 14:04:44.262429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:05.406 [2024-11-06 14:04:44.333309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:05.406 [2024-11-06 14:04:44.362824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:05.406 [2024-11-06 14:04:44.362853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:05.406 [2024-11-06 14:04:44.362859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:05.406 [2024-11-06 14:04:44.362865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:05.406 [2024-11-06 14:04:44.362869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:05.406 [2024-11-06 14:04:44.363983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:05.406 [2024-11-06 14:04:44.364103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:05.406 [2024-11-06 14:04:44.364105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:05.406 [2024-11-06 14:04:44.603906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:05.406 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:05.665 Malloc0
00:21:05.665 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:05.924 14:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:05.924 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:06.183 [2024-11-06 14:04:45.265249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:06.183 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:06.183 [2024-11-06 14:04:45.421714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:06.183 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:06.443 [2024-11-06 14:04:45.582157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
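The whole target bring-up above reduces to five RPCs, with the listener added on three ports so the host has somewhere to fail over to. A condensed sketch ($SPDK_DIR stands in for the Jenkins workspace path; every argument is taken from the trace):

SPDK_RPC="$SPDK_DIR/scripts/rpc.py"
"$SPDK_RPC" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options exactly as traced
"$SPDK_RPC" bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512-byte blocks
"$SPDK_RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK_RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                      # three listeners = three failover paths
    "$SPDK_RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done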
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=967579
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 967579 /var/tmp/bdevperf.sock
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 967579 ']'
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:06.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:06.443 14:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:07.380 14:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:07.380 14:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:21:07.380 14:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:07.639 NVMe0n1
00:21:07.639 14:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:07.898
00:21:07.898 14:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=967920
00:21:07.898 14:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:21:07.898 14:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:09.275 14:04:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:09.275 [2024-11-06 14:04:48.295616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf5370 is same with the state(6) to be set
[... identical tcp.c:1773 *ERROR* records for tqpair=0xbf5370 repeated; duplicates omitted ...]
00:21:09.276 [2024-11-06 14:04:48.296234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf5370 is same with the state(6) to be set
00:21:09.276 14:04:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:12.563 14:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:12.563
00:21:12.563 14:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:12.563 [2024-11-06 14:04:51.706651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf6170 is same with the state(6) to be set
[... identical tcp.c:1773 *ERROR* records for tqpair=0xbf6170 repeated; duplicates omitted ...]
00:21:12.565 [2024-11-06 14:04:51.707152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf6170 is same with the state(6) to be set
00:21:12.565 14:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:15.853 14:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:15.853 [2024-11-06 14:04:54.873911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:15.853 14:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:16.788 14:04:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:16.788 [2024-11-06 14:04:56.042485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd41400 is same with the state(6) to be set
[... identical tcp.c:1773 *ERROR* records for tqpair=0xd41400 repeated; duplicates omitted ...]
00:21:16.788 [2024-11-06 14:04:56.042754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd41400 is same with the state(6) to be set
00:21:16.788 14:04:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 967920
00:21:23.360 {
00:21:23.360 "results": [
00:21:23.360 {
00:21:23.360 "job": "NVMe0n1",
00:21:23.360 "core_mask": "0x1",
00:21:23.360 "workload": "verify",
00:21:23.360 "status": "finished",
00:21:23.360 "verify_range": {
00:21:23.360 "start": 0,
00:21:23.360 "length": 16384
00:21:23.360 },
00:21:23.360 "queue_depth": 128,
00:21:23.360 "io_size": 4096, 00:21:23.360 "runtime": 15.004105, 00:21:23.360 "iops": 12927.862075078787, 00:21:23.360 "mibps": 50.49946123077651, 00:21:23.360 "io_failed": 6213, 00:21:23.360 "io_timeout": 0, 00:21:23.360 "avg_latency_us": 9572.883302561644, 00:21:23.360 "min_latency_us": 366.93333333333334, 00:21:23.360 "max_latency_us": 15837.866666666667 00:21:23.360 } 00:21:23.360 ], 00:21:23.360 "core_count": 1 00:21:23.360 } 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 967579 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 967579 ']' 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 967579 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 967579 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 967579' 00:21:23.360 killing process with pid 967579 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 967579 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 967579 00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:23.360 [2024-11-06 14:04:45.633626] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:21:23.360 [2024-11-06 14:04:45.633682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid967579 ] 00:21:23.360 [2024-11-06 14:04:45.711817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.360 [2024-11-06 14:04:45.747935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.360 Running I/O for 15 seconds... 
00:21:23.360 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-06 14:04:45.633626] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
[2024-11-06 14:04:45.633682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid967579 ]
[2024-11-06 14:04:45.711817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-06 14:04:45.747935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
00:21:23.360 11210.00 IOPS, 43.79 MiB/s [2024-11-06T13:05:02.644Z]
[2024-11-06 14:04:48.298956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 14:04:48.298991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 14:04:48.299008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 14:04:48.299017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 14:04:48.299027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-06 14:04:48.299035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / ABORTED - SQ DELETION (00/08) pair repeats for the remaining in-flight READ and WRITE commands (lba 96784 through 97128); individual records omitted ...]
[2024-11-06 14:04:48.299684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:27 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.361 [2024-11-06 14:04:48.299691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.361 [2024-11-06 14:04:48.299700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.361 [2024-11-06 14:04:48.299708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.361 [2024-11-06 14:04:48.299717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.361 [2024-11-06 14:04:48.299724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.361 [2024-11-06 14:04:48.299733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.361 [2024-11-06 14:04:48.299741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.361 [2024-11-06 14:04:48.299750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.361 [2024-11-06 14:04:48.299758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.361 [2024-11-06 14:04:48.299767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97216 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.299960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 14:04:48.299976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.299986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 14:04:48.299993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 14:04:48.300010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 
14:04:48.300027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 14:04:48.300045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 14:04:48.300061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.362 [2024-11-06 14:04:48.300079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.362 [2024-11-06 14:04:48.300397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.362 [2024-11-06 14:04:48.300406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.363 [2024-11-06 14:04:48.300549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 
0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97560 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97576 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97584 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300881] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97592 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.300977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.300983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97616 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.300990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.300998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.301003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.301009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 00:21:23.363 [2024-11-06 14:04:48.301016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.363 [2024-11-06 14:04:48.301024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.363 [2024-11-06 14:04:48.301030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.363 [2024-11-06 14:04:48.301035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97632 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97640 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97648 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97656 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97664 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97672 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97680 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97688 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97696 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97704 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97712 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97720 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97728 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 
14:04:48.301376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.364 [2024-11-06 14:04:48.301513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:23.364 [2024-11-06 14:04:48.301520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:21:23.364 [2024-11-06 14:04:48.301527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301573] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:23.364 [2024-11-06 14:04:48.301595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.364 [2024-11-06 14:04:48.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.364 [2024-11-06 14:04:48.301619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.364 [2024-11-06 14:04:48.301635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.364 [2024-11-06 14:04:48.301650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.364 [2024-11-06 14:04:48.301657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:23.364 [2024-11-06 14:04:48.301685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242ad80 (9): Bad file descriptor 00:21:23.365 [2024-11-06 14:04:48.305212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:23.365 [2024-11-06 14:04:48.328177] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:21:23.365 11819.50 IOPS, 46.17 MiB/s [2024-11-06T13:05:02.649Z] 12189.33 IOPS, 47.61 MiB/s [2024-11-06T13:05:02.649Z] 12398.25 IOPS, 48.43 MiB/s [2024-11-06T13:05:02.649Z]
00:21:23.365 [2024-11-06 14:04:51.708142 - 14:04:51.708640] [... repeated nvme_qpair.c NOTICE pairs elided: a queued READ (sqid:1, lba 61120) and queued WRITE commands (sqid:1, lba 61312-61616) printed by nvme_io_qpair_print_command, each completed with "ABORTED - SQ DELETION (00/08) qid:1 cid:0" ...]
00:21:23.366 [2024-11-06 14:04:51.708647]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708765] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61784 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.366 [2024-11-06 14:04:51.708944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.366 [2024-11-06 14:04:51.708951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.708956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.708962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.708967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.708974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.708983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.708989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.708994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 
[2024-11-06 14:04:51.709006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.367 [2024-11-06 14:04:51.709099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:23.367 [2024-11-06 14:04:51.709362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.367 [2024-11-06 14:04:51.709407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.367 [2024-11-06 14:04:51.709412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.368 [2024-11-06 14:04:51.709419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.368 [2024-11-06 14:04:51.709425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.368 [2024-11-06 14:04:51.709431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.368 [2024-11-06 14:04:51.709436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.368 [2024-11-06 14:04:51.709443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.368 [2024-11-06 14:04:51.709447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.368 [2024-11-06 14:04:51.709454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.368 [2024-11-06 14:04:51.709459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.368 [2024-11-06 14:04:51.709465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.368 [2024-11-06 14:04:51.709470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.368 [2024-11-06 14:04:51.709486] 
00:21:23.368 [2024-11-06 14:04:51.709486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:23.368 [2024-11-06 14:04:51.709491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62136 len:8 PRP1 0x0 PRP2 0x0
00:21:23.368 [2024-11-06 14:04:51.709496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.368 [2024-11-06 14:04:51.709505 - 14:04:51.709805] nvme_qpair.c: [repeated groups of 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:61184-61304 len:8 PRP1 0x0 PRP2 0x0, each completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:23.368 [2024-11-06 14:04:51.709836] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:23.368 [2024-11-06 14:04:51.709852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.368 [2024-11-06 14:04:51.709858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.368 [2024-11-06 14:04:51.709864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.368 [2024-11-06 14:04:51.709869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.368 [2024-11-06 14:04:51.709875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.368 [2024-11-06 14:04:51.709880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.368 [2024-11-06 14:04:51.709886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.368 [2024-11-06 14:04:51.709891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.369 [2024-11-06 14:04:51.709897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:21:23.369 [2024-11-06 14:04:51.709916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242ad80 (9): Bad file descriptor
00:21:23.369 [2024-11-06 14:04:51.712333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:21:23.369 [2024-11-06 14:04:51.776815] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:21:23.369 12338.60 IOPS, 48.20 MiB/s [2024-11-06T13:05:02.653Z] 12441.33 IOPS, 48.60 MiB/s [2024-11-06T13:05:02.653Z] 12533.29 IOPS, 48.96 MiB/s [2024-11-06T13:05:02.653Z] 12617.12 IOPS, 49.29 MiB/s [2024-11-06T13:05:02.653Z]
00:21:23.369 [2024-11-06 14:04:56.043554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:23.369 [2024-11-06 14:04:56.043583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.369 [2024-11-06 14:04:56.043595 - 14:04:56.044473] nvme_qpair.c: [repeated print_command/print_completion pairs: WRITE sqid:1 lba:12824-13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:23.371 [2024-11-06 14:04:56.044480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:23.371 [2024-11-06 14:04:56.044485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 
[2024-11-06 14:04:56.044727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.371 [2024-11-06 14:04:56.044750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.371 [2024-11-06 14:04:56.044754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.044992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.044999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.372 [2024-11-06 14:04:56.045074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.372 [2024-11-06 14:04:56.045089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:23.372 
00:21:23.372 [2024-11-06 14:04:56.045094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:23.372 [2024-11-06 14:04:56.045102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12808 len:8 PRP1 0x0 PRP2 0x0
00:21:23.372 [2024-11-06 14:04:56.045108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.372 [2024-11-06 14:04:56.045141] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:23.372 [2024-11-06 14:04:56.045157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.372 [2024-11-06 14:04:56.045163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.372 [2024-11-06 14:04:56.045169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.372 [2024-11-06 14:04:56.045174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.372 [2024-11-06 14:04:56.045179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.372 [2024-11-06 14:04:56.045184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.372 [2024-11-06 14:04:56.045190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:23.372 [2024-11-06 14:04:56.045195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:23.372 [2024-11-06 14:04:56.045200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:21:23.372 [2024-11-06 14:04:56.047643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:21:23.372 [2024-11-06 14:04:56.047664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242ad80 (9): Bad file descriptor
00:21:23.372 [2024-11-06 14:04:56.071099] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
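The "(00/08)" pair printed with every completion above is the NVMe status code type and status code: 0x00 is Generic Command Status and 0x08 is Command Aborted due to SQ Deletion, which is expected while bdev_nvme tears down the old queue pair to fail over to the other portal. A hypothetical bash helper (not part of failover.sh) that decodes the values seen in this log:

    # Hypothetical decoder for the "(sct/sc)" pair in the completions above.
    decode_nvme_status() {
        case "$1" in
            00/00) echo "SUCCESSFUL COMPLETION" ;;   # generic status / success
            00/07) echo "ABORTED - BY REQUEST" ;;    # generic status / abort requested
            00/08) echo "ABORTED - SQ DELETION" ;;   # generic status / submission queue deleted
            *)     echo "unknown status $1" ;;
        esac
    }
    decode_nvme_status 00/08   # -> ABORTED - SQ DELETION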
00:21:23.372 12636.78 IOPS, 49.36 MiB/s
[2024-11-06T13:05:02.656Z] 12709.80 IOPS, 49.65 MiB/s
[2024-11-06T13:05:02.656Z] 12765.18 IOPS, 49.86 MiB/s
[2024-11-06T13:05:02.656Z] 12815.08 IOPS, 50.06 MiB/s
[2024-11-06T13:05:02.656Z] 12849.62 IOPS, 50.19 MiB/s
[2024-11-06T13:05:02.656Z] 12889.93 IOPS, 50.35 MiB/s
[2024-11-06T13:05:02.656Z] 12930.87 IOPS, 50.51 MiB/s
00:21:23.372 Latency(us)
00:21:23.372 [2024-11-06T13:05:02.656Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min            max
00:21:23.373 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:23.373 Verification LBA range: start 0x0 length 0x4000
00:21:23.373 NVMe0n1                     :      15.00   12927.86      50.50     414.09     0.00     9572.88     366.93       15837.87
00:21:23.373 [2024-11-06T13:05:02.657Z] ===================================================================================================================
00:21:23.373 [2024-11-06T13:05:02.657Z] Total                       :            12927.86      50.50     414.09     0.00     9572.88     366.93       15837.87
00:21:23.373 Received shutdown signal, test time was about 15.000000 seconds
00:21:23.373
00:21:23.373 Latency(us)
00:21:23.373 [2024-11-06T13:05:02.657Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min            max
00:21:23.373 [2024-11-06T13:05:02.657Z] ===================================================================================================================
00:21:23.373 [2024-11-06T13:05:02.657Z] Total                       :                0.00       0.00       0.00     0.00        0.00       0.00           0.00
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=971251
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 971251 /var/tmp/bdevperf.sock
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 971251 ']'
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
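Restated as plain shell, the assertion traced at host/failover.sh@65-67 above checks that the first phase performed exactly three failovers; a minimal sketch (the exact plumbing inside failover.sh may differ):

    # Sketch of the check traced above: three failovers ran earlier in the
    # test, so the bdevperf output captured in try.txt must report exactly
    # three successful controller resets.
    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi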
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:21:23.373 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:23.633 [2024-11-06 14:05:02.769461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:23.633 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:23.891 [2024-11-06 14:05:02.925864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:23.891 14:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:24.150 NVMe0n1
00:21:24.409 14:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:24.409
00:21:24.668 14:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:24.668
00:21:24.928 14:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:21:24.928 14:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:25.189 14:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:21:28.480 14:05:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:05:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
14:05:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
14:05:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=972259
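The trace above condenses the second phase of the test into one run; restated as plain shell (paths shortened, harness helpers and error handling omitted), the sequence is roughly the following sketch, which assumes the target subsystem nqn.2016-06.io.spdk:cnode1 already listens on port 4420:

    # Sketch of the sequence traced above (not failover.sh itself).
    rpc() { scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

    # Expose two additional portals on the target side.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4422

    # Register all three paths under one controller name with the explicit
    # failover multipath policy, then drop the primary so queued I/O has to
    # move to 4421/4422 (producing the SQ DELETION aborts seen earlier).
    for port in 4420 4421 4422; do
        rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1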
00:21:28.480 14:05:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 972259
00:21:29.419 {
00:21:29.419   "results": [
00:21:29.419     {
00:21:29.419       "job": "NVMe0n1",
00:21:29.419       "core_mask": "0x1",
00:21:29.419       "workload": "verify",
00:21:29.419       "status": "finished",
00:21:29.419       "verify_range": {
00:21:29.419         "start": 0,
00:21:29.419         "length": 16384
00:21:29.419       },
00:21:29.419       "queue_depth": 128,
00:21:29.419       "io_size": 4096,
00:21:29.419       "runtime": 1.047933,
00:21:29.419       "iops": 12614.35607047397,
00:21:29.419       "mibps": 49.274828400288946,
00:21:29.419       "io_failed": 0,
00:21:29.419       "io_timeout": 0,
00:21:29.419       "avg_latency_us": 9717.742699649494,
00:21:29.419       "min_latency_us": 2143.5733333333333,
00:21:29.419       "max_latency_us": 43472.21333333333
00:21:29.419     }
00:21:29.419   ],
00:21:29.419   "core_count": 1
00:21:29.419 }
00:21:29.420 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:29.420 [2024-11-06 14:05:02.464504] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
[2024-11-06 14:05:02.464561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971251 ]
[2024-11-06 14:05:02.530217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-06 14:05:02.559126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-06 14:05:04.198546] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-11-06 14:05:04.198585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 14:05:04.198594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 14:05:04.198601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 14:05:04.198606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 14:05:04.198612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 14:05:04.198618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 14:05:04.198623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 14:05:04.198628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 14:05:04.198634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:21:29.420 [2024-11-06 14:05:04.198654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-06 14:05:04.198666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252cd80 (9): Bad file descriptor
[2024-11-06 14:05:04.290446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:21:29.420 Running I/O for 1 seconds...
00:21:29.420 13091.00 IOPS, 51.14 MiB/s
00:21:29.420
00:21:29.420 Latency(us)
00:21:29.420 [2024-11-06T13:05:08.704Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min            max
00:21:29.420 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:29.420 Verification LBA range: start 0x0 length 0x4000
00:21:29.420 NVMe0n1                     :       1.05   12614.36      49.27       0.00     0.00     9717.74    2143.57       43472.21
00:21:29.420 [2024-11-06T13:05:08.704Z] ===================================================================================================================
00:21:29.420 [2024-11-06T13:05:08.704Z] Total                       :            12614.36      49.27       0.00     0.00     9717.74    2143.57       43472.21
00:21:29.420 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:29.420 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:29.679 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:29.939 14:05:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:29.939 14:05:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:33.230 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 971251
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 971251 ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 971251
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 971251
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 971251'
killing process with pid 971251
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 971251
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 971251
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:33.490 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 967224 ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 967224
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 967224 ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 967224
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 967224
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 967224'
killing process with pid 967224
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 967224
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 967224
00:21:33.748 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:33.748 14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:35.653 14:05:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:35.653
00:21:35.653 real	0m36.044s
00:21:35.653 user	1m56.041s
00:21:35.653 sys	0m6.501s
00:21:35.653 14:05:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable
14:05:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:35.653 ************************************
00:21:35.653 END TEST nvmf_failover
00:21:35.653 ************************************
00:21:35.653 14:05:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
14:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
14:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
14:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:35.994 ************************************
00:21:35.994 START TEST nvmf_host_discovery
00:21:35.994 ************************************
00:21:35.994 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:21:35.994 * Looking for test storage...
00:21:35.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:35.994 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.994 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:21:35.994 14:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.994 --rc genhtml_branch_coverage=1 00:21:35.994 --rc genhtml_function_coverage=1 00:21:35.994 --rc genhtml_legend=1 00:21:35.994 --rc geninfo_all_blocks=1 00:21:35.994 --rc geninfo_unexecuted_blocks=1 00:21:35.994 00:21:35.994 ' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.994 --rc genhtml_branch_coverage=1 00:21:35.994 --rc genhtml_function_coverage=1 00:21:35.994 --rc genhtml_legend=1 00:21:35.994 --rc geninfo_all_blocks=1 00:21:35.994 --rc geninfo_unexecuted_blocks=1 00:21:35.994 00:21:35.994 ' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.994 --rc genhtml_branch_coverage=1 00:21:35.994 --rc genhtml_function_coverage=1 00:21:35.994 --rc genhtml_legend=1 00:21:35.994 --rc geninfo_all_blocks=1 00:21:35.994 --rc geninfo_unexecuted_blocks=1 00:21:35.994 00:21:35.994 ' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.994 --rc genhtml_branch_coverage=1 00:21:35.994 --rc genhtml_function_coverage=1 00:21:35.994 --rc genhtml_legend=1 00:21:35.994 --rc geninfo_all_blocks=1 00:21:35.994 --rc geninfo_unexecuted_blocks=1 00:21:35.994 00:21:35.994 ' 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.994 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:35.994 14:05:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated six more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same rotation of the three toolchain directories ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same rotation of the three toolchain directories ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the same PATH value printed by export.sh@4 ...]:/var/lib/snapd/snap/bin
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.995 14:05:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:21:41.362 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:41.363 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:41.363 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.363 14:05:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:41.363 Found net devices under 0000:31:00.0: cvl_0_0 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:41.363 Found net devices under 0000:31:00.1: cvl_0_1 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.363 
14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:21:41.363 00:21:41.363 --- 10.0.0.2 ping statistics --- 00:21:41.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.363 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:21:41.363 00:21:41.363 --- 10.0.0.1 ping statistics --- 00:21:41.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.363 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.363 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=977933 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 977933 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 977933 ']' 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.364 14:05:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.364 [2024-11-06 14:05:20.403936] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
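[Editor's note] The netns plumbing traced above (nvmf/common.sh@265-291) is easier to follow collapsed into plain commands. This is a sketch assembled from the xtrace, not the script itself; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are verbatim from this run:

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # root ns -> target ns (the ping above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

This is why the nvmf_tgt start directly above is wrapped in NVMF_TARGET_NS_CMD ("ip netns exec cvl_0_0_ns_spdk"): the target binds 10.0.0.2 inside the namespace while the host-side process stays in the root namespace on 10.0.0.1.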
00:21:41.364 [2024-11-06 14:05:20.403998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.364 [2024-11-06 14:05:20.482619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.364 [2024-11-06 14:05:20.518278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.364 [2024-11-06 14:05:20.518316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.364 [2024-11-06 14:05:20.518322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.364 [2024-11-06 14:05:20.518327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.364 [2024-11-06 14:05:20.518332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.364 [2024-11-06 14:05:20.518921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.972 [2024-11-06 14:05:21.206981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.972 [2024-11-06 14:05:21.215389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.972 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.972 null0 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 null1 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=977964 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 977964 /tmp/host.sock 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 977964 ']' 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:41.973 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:42.231 [2024-11-06 14:05:21.270534] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
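[Editor's note] rpc_cmd in this trace is the autotest wrapper around SPDK's JSON-RPC client; shown below as scripts/rpc.py equivalents (a sketch, arguments copied from the log). The provisioning done so far on the target's default socket amounts to:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags from NVMF_TRANSPORT_OPTS above
    scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512            # 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine

The nvme0n1/nvme0n2 bdevs the host enumerates further down are namespaces backed by these two null bdevs, attached once nvmf_create_subsystem and nvmf_subsystem_add_ns run below.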
00:21:42.231 [2024-11-06 14:05:21.270572] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977964 ] 00:21:42.231 [2024-11-06 14:05:21.340263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.231 [2024-11-06 14:05:21.376394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 
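[Editor's note] The second nvmf_tgt instance (pid 977964, core mask 0x1, RPC socket /tmp/host.sock) acts as the NVMe-oF host. Collapsed from the xtrace, the discovery-client setup is roughly:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # The helpers polled below reduce to jq pipelines over the host socket:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # get_subsystem_names
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # get_bdev_list

Both pipelines print an empty string at this point, which is exactly what the '' == '' checks above are asserting: nothing is attached until the target-side subsystem and data listener exist.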
00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.231 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.490 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.491 [2024-11-06 14:05:21.684303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.491 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:21:42.751 14:05:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:21:43.318 [2024-11-06 14:05:22.472756] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:43.318 [2024-11-06 14:05:22.472777] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:43.318 [2024-11-06 14:05:22.472790] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:43.318 
[2024-11-06 14:05:22.559063] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:43.577 [2024-11-06 14:05:22.655094] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:21:43.577 [2024-11-06 14:05:22.655911] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16ce670:1 started. 00:21:43.577 [2024-11-06 14:05:22.657491] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:43.577 [2024-11-06 14:05:22.657510] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:43.577 [2024-11-06 14:05:22.702853] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16ce670 was disconnected and freed. delete nvme_qpair. 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:43.577 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.837 14:05:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:43.837 14:05:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 14:05:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.837 [2024-11-06 14:05:23.010164] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16ce850:1 started. 00:21:43.837 [2024-11-06 14:05:23.013725] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16ce850 was disconnected and freed. delete nvme_qpair. 
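[Editor's note] Every waitforcondition call in this trace polls with the same pattern. Reconstructed from the common/autotest_common.sh@916-922 line numbers in the xtrace (a sketch; the in-tree helper may handle the timeout case differently):

    waitforcondition() {
        local cond=$1                    # @916
        local max=10                     # @917
        while (( max-- )); do            # @918: up to 10 attempts
            eval "$cond" && return 0     # @919/@920: condition met
            sleep 1                      # @922: back off and retry
        done
        return 1   # assumed failure path; never reached in this run
    }
    # e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'

The conditions are passed as strings and eval'd so that $(get_bdev_list) and friends are re-expanded on every attempt rather than once at call time.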
00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.837 [2024-11-06 14:05:23.064062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.837 [2024-11-06 14:05:23.064631] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:43.837 [2024-11-06 14:05:23.064652] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:21:43.837 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.838 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:44.097 14:05:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:44.097 14:05:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:21:44.097 [2024-11-06 14:05:23.192533] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:44.097 [2024-11-06 14:05:23.298502] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:21:44.098 [2024-11-06 14:05:23.298538] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:44.098 [2024-11-06 14:05:23.298547] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:44.098 [2024-11-06 14:05:23.298553] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 
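The eval/retry pattern in the @916-@922 frames above is the suite's generic polling helper. A minimal reconstruction from the visible xtrace (the real helper lives in common/autotest_common.sh and may differ in details such as its failure path):

    waitforcondition() {
        local cond=$1    # any bash expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10     # roughly a 10-second budget at one probe per second
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

Each probe re-evaluates the condition from scratch, which is why the bdev_nvme_get_controllers | jq -r '.[].ctrlrs[].trid.trsvcid' pipeline repeats once per second until the 4421 path appears.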
00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.037 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.037 [2024-11-06 14:05:24.231819] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:45.037 [2024-11-06 14:05:24.231835] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:45.037 [2024-11-06 14:05:24.234368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.037 [2024-11-06 14:05:24.234381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.037 [2024-11-06 14:05:24.234389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.037 [2024-11-06 14:05:24.234394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.037 [2024-11-06 14:05:24.234400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.037 [2024-11-06 14:05:24.234405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.038 [2024-11-06 14:05:24.234411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.038 [2024-11-06 14:05:24.234416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.038 [2024-11-06 14:05:24.234421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.038 [2024-11-06 14:05:24.244384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.038 [2024-11-06 14:05:24.254419] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.038 [2024-11-06 14:05:24.254427] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.038 [2024-11-06 14:05:24.254431] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.254435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.038 [2024-11-06 14:05:24.254447] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.254737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.038 [2024-11-06 14:05:24.254748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.038 [2024-11-06 14:05:24.254753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.038 [2024-11-06 14:05:24.254762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.038 [2024-11-06 14:05:24.254769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.038 [2024-11-06 14:05:24.254774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.038 [2024-11-06 14:05:24.254780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:45.038 [2024-11-06 14:05:24.254785] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.038 [2024-11-06 14:05:24.254789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
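The connect() failure above, and the near-identical retries that follow, are expected rather than a test bug: step @127 just removed the 4420 listener, so every reconnect attempt to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED) until discovery steers the host onto the surviving 4421 path. A quick manual probe showing the same refusal (hypothetical commands, assuming an nc build that supports -z/-w and a shell with access to the target's network):

    nc -z -w 1 10.0.0.2 4420 || echo "4420 refused (errno 111), exactly what the reconnect poller sees"
    nc -z -w 1 10.0.0.2 4421 && echo "4421 still listening"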
00:21:45.038 [2024-11-06 14:05:24.254792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:45.038 [2024-11-06 14:05:24.264476] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.038 [2024-11-06 14:05:24.264486] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.038 [2024-11-06 14:05:24.264489] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.264493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.038 [2024-11-06 14:05:24.264506] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.264793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.038 [2024-11-06 14:05:24.264802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.038 [2024-11-06 14:05:24.264808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.038 [2024-11-06 14:05:24.264816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.038 [2024-11-06 14:05:24.264824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.038 [2024-11-06 14:05:24.264828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.038 [2024-11-06 14:05:24.264834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:45.038 [2024-11-06 14:05:24.264838] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.038 [2024-11-06 14:05:24.264841] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:45.038 [2024-11-06 14:05:24.264846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
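The delete-qpairs / disconnect / reconnect / refuse cycle repeats roughly every 10 ms in this trace (.254, .264, .274, ...) because the host app runs with default reconnect behavior and the 4420 listener never comes back. Applications that want a slower cadence or an eventual give-up can tune this; a hedged sketch, assuming the bdev_nvme_set_options flag spellings of recent SPDK rpc.py releases and that the call is made before any controller is attached:

    # Mirrors the rpc_cmd -s /tmp/host.sock socket used throughout this test
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --ctrlr-loss-timeout-sec 10 \
        --fast-io-fail-timeout-sec 5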
00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.038 [2024-11-06 14:05:24.274536] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.038 [2024-11-06 14:05:24.274546] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.038 [2024-11-06 14:05:24.274549] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.274552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.038 [2024-11-06 14:05:24.274562] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.274882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.038 [2024-11-06 14:05:24.274891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.038 [2024-11-06 14:05:24.274899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.038 [2024-11-06 14:05:24.274907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.038 [2024-11-06 14:05:24.274914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.038 [2024-11-06 14:05:24.274918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.038 [2024-11-06 14:05:24.274923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
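get_bdev_list, polled at step @130, is a small projection over bdev_get_bdevs; every flag in the reconstruction below is visible in the trace (sort plus xargs flatten the names into one space-separated line so the == "nvme0n1 nvme0n2" comparison is stable):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }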
00:21:45.038 [2024-11-06 14:05:24.274928] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.038 [2024-11-06 14:05:24.274931] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:45.038 [2024-11-06 14:05:24.274934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:45.038 [2024-11-06 14:05:24.284592] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.038 [2024-11-06 14:05:24.284603] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.038 [2024-11-06 14:05:24.284606] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.284609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.038 [2024-11-06 14:05:24.284619] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:45.038 [2024-11-06 14:05:24.284809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.038 [2024-11-06 14:05:24.284817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.038 [2024-11-06 14:05:24.284823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.038 [2024-11-06 14:05:24.284831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.038 [2024-11-06 14:05:24.284838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.038 [2024-11-06 14:05:24.284842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.038 [2024-11-06 14:05:24.284847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:45.038 [2024-11-06 14:05:24.284852] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.038 [2024-11-06 14:05:24.284855] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:45.038 [2024-11-06 14:05:24.284858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:45.038 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.038 [2024-11-06 14:05:24.294649] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.039 [2024-11-06 14:05:24.294657] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.039 [2024-11-06 14:05:24.294660] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:21:45.039 [2024-11-06 14:05:24.294663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.039 [2024-11-06 14:05:24.294673] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:45.039 [2024-11-06 14:05:24.294959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.039 [2024-11-06 14:05:24.294967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.039 [2024-11-06 14:05:24.294972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.039 [2024-11-06 14:05:24.294979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.039 [2024-11-06 14:05:24.294987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.039 [2024-11-06 14:05:24.294991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.039 [2024-11-06 14:05:24.294996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:45.039 [2024-11-06 14:05:24.295000] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.039 [2024-11-06 14:05:24.295003] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:45.039 [2024-11-06 14:05:24.295006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.039 [2024-11-06 14:05:24.304702] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.039 [2024-11-06 14:05:24.304711] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.039 [2024-11-06 14:05:24.304714] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
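Step @131 then waits for the path list to shrink to the second port alone. get_subsystem_paths maps a controller name to the trsvcid (TCP port) of each connected path, again reconstructed from the flags in the trace (sort -n orders the ports numerically, xargs joins them):

    get_subsystem_paths() {
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The check passes once the output drops from "4420 4421" to just "4421", which the 14:05:25 probe below confirms.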
00:21:45.039 [2024-11-06 14:05:24.304717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.039 [2024-11-06 14:05:24.304726] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.039 [2024-11-06 14:05:24.305058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.039 [2024-11-06 14:05:24.305068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:45.039 [2024-11-06 14:05:24.305074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.039 [2024-11-06 14:05:24.305086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.039 [2024-11-06 14:05:24.305098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.039 [2024-11-06 14:05:24.305106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.039 [2024-11-06 14:05:24.305112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:45.039 [2024-11-06 14:05:24.305118] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.039 [2024-11-06 14:05:24.305124] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:45.039 [2024-11-06 14:05:24.305128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:45.039 [2024-11-06 14:05:24.314754] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:45.039 [2024-11-06 14:05:24.314764] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:45.039 [2024-11-06 14:05:24.314767] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:45.039 [2024-11-06 14:05:24.314770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:45.039 [2024-11-06 14:05:24.314781] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:45.039 [2024-11-06 14:05:24.315063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.039 [2024-11-06 14:05:24.315071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ed90 with addr=10.0.0.2, port=4420 00:21:45.039 [2024-11-06 14:05:24.315076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ed90 is same with the state(6) to be set 00:21:45.039 [2024-11-06 14:05:24.315084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169ed90 (9): Bad file descriptor 00:21:45.039 [2024-11-06 14:05:24.315097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:45.039 [2024-11-06 14:05:24.315102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:45.039 [2024-11-06 14:05:24.315107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:45.039 [2024-11-06 14:05:24.315112] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:45.039 [2024-11-06 14:05:24.315115] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:45.039 [2024-11-06 14:05:24.315118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:45.039 [2024-11-06 14:05:24.319197] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:45.039 [2024-11-06 14:05:24.319210] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:45.039 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.298 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:45.298 14:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- 
# [[ 4421 == \4\4\2\1 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 
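is_notification_count_eq asks the host app for every notification newer than the last seen id and counts them; notify_id then advances by the count so no event is counted twice. A reconstruction consistent with the values in this trace (keeping the state in the globals notification_count and notify_id is an assumption inferred from the xtrace):

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Here the poll at -i 2 returns nothing, so notify_id stays at 2; after bdev_nvme_stop_discovery tears the controller down, the same helper (step @138, below) sees two new events, presumably the two namespace bdevs going away, and notify_id advances to 4.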
00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:21:46.237 
14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:21:46.237 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:21:46.238 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:46.238 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.238 14:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.617 [2024-11-06 14:05:26.565420] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:47.617 [2024-11-06 14:05:26.565435] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:47.617 [2024-11-06 14:05:26.565444] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:47.617 [2024-11-06 14:05:26.652694] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:47.877 [2024-11-06 14:05:26.920966] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:21:47.877 [2024-11-06 14:05:26.921573] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1806830:1 started. 
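With discovery for -b nvme now live again, the next step (@143) deliberately re-issues the same bdev_nvme_start_discovery and expects the JSON-RPC error -17 "File exists" because the discovery name is already taken. The NOT wrapper that encodes "this must fail" is partly visible in the xtrace; a minimal reconstruction (the real helper also screens out signal deaths via its (( es > 128 )) branch):

    NOT() {
        local es=0
        "$@" || es=$?
        # success is the unexpected outcome here, so invert the exit status
        (( es != 0 ))
    }
    # Expected to fail with "File exists" while discovery 'nvme' is running:
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

Further down, step @155 plays the same trick against the dead port 8010 with -T 3000 (a 3-second attach timeout) and expects -110 "Connection timed out" instead.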
00:21:47.877 [2024-11-06 14:05:26.922899] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:47.877 [2024-11-06 14:05:26.922920] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.877 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.877 request: 00:21:47.877 { 00:21:47.877 "name": "nvme", 00:21:47.877 "trtype": "tcp", 00:21:47.877 "traddr": "10.0.0.2", 00:21:47.877 "adrfam": "ipv4", 00:21:47.877 "trsvcid": "8009", 00:21:47.878 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:47.878 "wait_for_attach": true, 00:21:47.878 "method": "bdev_nvme_start_discovery", 00:21:47.878 "req_id": 1 00:21:47.878 } 00:21:47.878 Got JSON-RPC error response 00:21:47.878 response: 00:21:47.878 { 00:21:47.878 "code": -17, 00:21:47.878 "message": "File exists" 00:21:47.878 } 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # sort 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.878 [2024-11-06 14:05:26.974908] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1806830 was disconnected and freed. delete nvme_qpair. 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.878 14:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 request: 00:21:47.878 { 00:21:47.878 "name": "nvme_second", 00:21:47.878 "trtype": "tcp", 00:21:47.878 "traddr": "10.0.0.2", 00:21:47.878 "adrfam": "ipv4", 00:21:47.878 "trsvcid": "8009", 00:21:47.878 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:47.878 "wait_for_attach": true, 00:21:47.878 "method": "bdev_nvme_start_discovery", 
00:21:47.878 "req_id": 1 00:21:47.878 } 00:21:47.878 Got JSON-RPC error response 00:21:47.878 response: 00:21:47.878 { 00:21:47.878 "code": -17, 00:21:47.878 "message": "File exists" 00:21:47.878 } 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.878 14:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.817 [2024-11-06 14:05:28.085987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.817 [2024-11-06 14:05:28.086010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ae10 with addr=10.0.0.2, port=8010 00:21:48.817 [2024-11-06 14:05:28.086019] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:48.817 [2024-11-06 14:05:28.086025] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:48.817 [2024-11-06 14:05:28.086030] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:50.195 [2024-11-06 14:05:29.088415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.195 [2024-11-06 14:05:29.088437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169ae10 with addr=10.0.0.2, port=8010 00:21:50.195 [2024-11-06 14:05:29.088446] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:50.195 [2024-11-06 14:05:29.088451] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:50.195 [2024-11-06 14:05:29.088456] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:51.132 [2024-11-06 14:05:30.090447] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:51.132 request: 00:21:51.132 { 00:21:51.132 "name": "nvme_second", 00:21:51.132 "trtype": "tcp", 00:21:51.132 "traddr": "10.0.0.2", 00:21:51.132 "adrfam": "ipv4", 00:21:51.132 "trsvcid": "8010", 00:21:51.132 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:51.132 "wait_for_attach": false, 00:21:51.132 "attach_timeout_ms": 3000, 00:21:51.132 "method": "bdev_nvme_start_discovery", 00:21:51.132 "req_id": 1 00:21:51.132 } 00:21:51.132 Got JSON-RPC error response 00:21:51.132 response: 00:21:51.132 { 00:21:51.132 "code": -110, 00:21:51.132 "message": "Connection timed out" 00:21:51.132 } 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 977964 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.132 rmmod nvme_tcp 00:21:51.132 rmmod nvme_fabrics 00:21:51.132 rmmod nvme_keyring 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 977933 ']' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 977933 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 977933 ']' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 977933 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 977933 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 977933' 00:21:51.132 killing process with pid 977933 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 977933 00:21:51.132 14:05:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 977933 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.132 14:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.673 00:21:53.673 real 0m17.429s 00:21:53.673 user 0m21.221s 00:21:53.673 sys 0m5.036s 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.673 ************************************ 00:21:53.673 END TEST nvmf_host_discovery 00:21:53.673 ************************************ 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.673 ************************************ 00:21:53.673 START TEST nvmf_host_multipath_status 00:21:53.673 ************************************ 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:53.673 * Looking for test storage... 
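For orientation: the END/START banners above are printed by the run_test wrapper in common/autotest_common.sh, which times each test script and frames its output. A minimal illustrative sketch of that banner-and-timing pattern (an assumption from the log output, not the actual SPDK helper, which also manages exit codes and xtrace state):

run_test() {
  # illustrative sketch only; banners match the "START TEST"/"END TEST"
  # lines visible in this log
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"
  local rc=$?
  echo "************ END TEST $name ************"
  return $rc
}
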
00:21:53.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.673 --rc genhtml_branch_coverage=1 00:21:53.673 --rc genhtml_function_coverage=1 00:21:53.673 --rc genhtml_legend=1 00:21:53.673 --rc geninfo_all_blocks=1 00:21:53.673 --rc geninfo_unexecuted_blocks=1 00:21:53.673 00:21:53.673 ' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.673 --rc genhtml_branch_coverage=1 00:21:53.673 --rc genhtml_function_coverage=1 00:21:53.673 --rc genhtml_legend=1 00:21:53.673 --rc geninfo_all_blocks=1 00:21:53.673 --rc geninfo_unexecuted_blocks=1 00:21:53.673 00:21:53.673 ' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.673 --rc genhtml_branch_coverage=1 00:21:53.673 --rc genhtml_function_coverage=1 00:21:53.673 --rc genhtml_legend=1 00:21:53.673 --rc geninfo_all_blocks=1 00:21:53.673 --rc geninfo_unexecuted_blocks=1 00:21:53.673 00:21:53.673 ' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.673 --rc genhtml_branch_coverage=1 00:21:53.673 --rc genhtml_function_coverage=1 00:21:53.673 --rc genhtml_legend=1 00:21:53.673 --rc geninfo_all_blocks=1 00:21:53.673 --rc geninfo_unexecuted_blocks=1 00:21:53.673 00:21:53.673 ' 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
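The cmp_versions walk traced above (split on IFS=.-, then compare decimal components left to right) is what concludes that lcov 1.15 predates 2.x and selects the legacy --rc lcov_* options. A self-contained sketch of the same comparison, assuming purely numeric version fields:

version_lt() {
  # returns 0 when $1 < $2, comparing dot/dash-separated numeric fields;
  # missing fields are treated as 0, equality is not less-than
  local IFS=.- i
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1
}
version_lt 1.15 2 && echo "lcov 1.15 < 2: use legacy --rc lcov_* options"
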
00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.673 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.674 14:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.951 14:05:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:58.951 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
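The NIC discovery above is table-driven: nvmf/common.sh collects PCI IDs per family (e810 0x1592/0x159b, x722 0x37d2, assorted Mellanox 0x10xx/0xa2xx devices), then globs sysfs for the kernel net interface behind each matching function. The glob step can be reproduced standalone; the path layout is standard Linux sysfs, with the PCI address taken from the log:

pci=0000:31:00.0
for dev in /sys/bus/pci/devices/$pci/net/*; do
  [ -e "$dev" ] || continue  # glob stays literal if the driver is unbound
  echo "Found net device under $pci: ${dev##*/}"
done
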
00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:58.951 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.951 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:58.952 Found net devices under 0000:31:00.0: cvl_0_0 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:21:58.952 Found net devices under 0000:31:00.1: cvl_0_1 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.952 14:05:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:21:58.952 00:21:58.952 --- 10.0.0.2 ping statistics --- 00:21:58.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.952 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:21:58.952 00:21:58.952 --- 10.0.0.1 ping statistics --- 00:21:58.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.952 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=984470 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 984470 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 984470 ']' 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:58.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:58.952 14:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:58.952 [2024-11-06 14:05:37.976945] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:21:58.952 [2024-11-06 14:05:37.976993] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.952 [2024-11-06 14:05:38.060401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:58.952 [2024-11-06 14:05:38.096335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.952 [2024-11-06 14:05:38.096366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.952 [2024-11-06 14:05:38.096374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.952 [2024-11-06 14:05:38.096380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.952 [2024-11-06 14:05:38.096386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.952 [2024-11-06 14:05:38.097523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.952 [2024-11-06 14:05:38.097528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=984470 00:21:59.521 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:59.780 [2024-11-06 14:05:38.919639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.780 14:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:00.039 Malloc0 00:22:00.039 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:00.039 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.298 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.298 [2024-11-06 14:05:39.566823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.557 [2024-11-06 14:05:39.723195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=984836 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 984836 /var/tmp/bdevperf.sock 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 984836 ']' 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
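Condensed from the xtrace above, the target-side bring-up is a short RPC sequence, with all values as logged: a 64 MB, 512-byte-block malloc bdev exported through one subsystem on two TCP listeners, so the ANA tests below have two paths to flip between:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
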
00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:00.557 14:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.495 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:01.495 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:22:01.495 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:01.495 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:01.754 Nvme0n1 00:22:01.754 14:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:02.323 Nvme0n1 00:22:02.323 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:02.323 14:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.228 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:04.228 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:04.488 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:04.488 14:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.866 14:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:05.866 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:05.866 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:05.866 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.866 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:06.127 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.127 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:06.127 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.127 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.386 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:06.646 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.646 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:06.646 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
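Each check_status round above reduces to one jq filter per (listener, field) pair over bdev_nvme_get_io_paths on the bdevperf RPC socket. For example, the "is 4420 the current path" probe traced repeatedly in this log is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
  | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
# port_status then string-compares the result, e.g. [[ true == \t\r\u\e ]]
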
00:22:06.646 14:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.905 14:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:07.842 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:07.842 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:07.842 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.842 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.103 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.103 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:08.103 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.103 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.103 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.103 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.362 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.362 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.362 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.362 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.362 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.362 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.622 14:05:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.622 14:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.881 14:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.881 14:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:08.881 14:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.140 14:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:09.140 14:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.521 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:10.781 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.781 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:10.781 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.781 14:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:10.781 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.781 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:10.781 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.781 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.039 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.039 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.039 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.039 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.297 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.297 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:11.297 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:11.297 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:11.556 14:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:12.496 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:12.496 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:12.496 14:05:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.496 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:12.754 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.754 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:12.754 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.754 14:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:12.754 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.754 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:12.754 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.754 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:13.014 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.014 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:13.014 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:13.014 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:13.273 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:13.273 14:05:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:13.532 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:13.532 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:22:13.532 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:22:13.532 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:22:13.790 14:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:22:14.727 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:22:14.727 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:22:14.727 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:14.727 14:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:14.986 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:14.986 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:14.986 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:14.986 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
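The check_status call running here bundles six port_status assertions, and the order of its boolean arguments can be read directly off the @68 through @73 lines: current, connected, and accessible, each for port 4420 and then 4421. A sketch of the wrapper as the trace implies it (argument names are assumptions, not the test's own):

    # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

After both listeners were set inaccessible, the expectation false false true true false false therefore reads: neither path is current or accessible, but both TCP connections survive.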
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:15.245 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:15.503 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:15.762 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:15.762 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:22:15.762 14:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:22:16.020 14:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:16.020 14:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
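The three flags react to an ANA transition in different ways: accessible tracks whether the listener's ANA state is anything other than inaccessible; connected stays true throughout, because the TCP connection to the listener is kept open even while the path cannot serve I/O; and current reflects which path the active multipath policy actually selects. That is why the inaccessible/optimized split being verified here is expected to read false true true true false true. A quick way to eyeball all six values at once, using only commands the trace itself uses:

    # one-shot view of both paths' flags (sketch, same bdevperf socket as above)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'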
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.397 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:17.654 14:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:17.913 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:17.913 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:17.913 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:17.913 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:18.171 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:18.172 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
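Up to this point the bdev has been running SPDK's default active_passive policy, under which exactly one path can be current at a time, which is why every check so far expected one true and one false in the current pair. The @116 call switches Nvme0n1 to active_active, where every path in the best available ANA group is used for I/O and may report current as true; the upcoming check_status true true true true true true relies on that once both listeners are optimized. The switch is just another RPC against the bdevperf socket (a sketch, assuming the same socket and paths as the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    # with active_active, expect every accessible optimized path to be current
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq '[.poll_groups[].io_paths[].current]'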
00:22:18.172 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:22:18.172 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:22:18.431 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:22:18.690 14:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.625 14:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:19.882 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:19.883 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:19.883 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:19.883 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:20.141 14:05:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.141 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.400 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.400 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.400 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.400 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.659 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.659 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:20.659 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:20.659 14:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:20.917 14:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:21.852 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:21.852 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:21.852 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.852 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.110 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:22.369 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.369 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:22.369 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:22.369 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.628 14:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.887 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.887 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:22.887 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:22.887 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:23.146 14:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
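Each set_ANA_state step in this trace expands to exactly two RPC calls, one per listener, as the @59/@60 lines show. A minimal sketch of the helper, reconstructed from those lines rather than quoted from multipath_status.sh:

    # set_ANA_state <state-for-4420> <state-for-4421>
    set_ANA_state() {
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The sleep 1 that follows each transition gives the initiator time to pick up the ANA change notification and refresh its view of the paths before check_status samples them.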
00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.231 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.515 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.515 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.515 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.515 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.774 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.774 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.774 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.774 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.774 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.774 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:24.775 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.775 14:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:25.034 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:25.293 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:25.552 14:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:26.490 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:26.490 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:26.490 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.490 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.749 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.749 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:26.749 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.750 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.750 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.750 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.750 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.750 14:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.009 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:22:27.009 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:27.009 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:27.009 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:27.268 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:27.531 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:27.531 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 984836
00:22:27.531 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 984836 ']'
00:22:27.531 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 984836
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 984836
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 984836'
killing process with pid 984836
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 984836
00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 984836
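With all six flags verified one last time, the test tears bdevperf down: the killprocess trace above shows the usual autotest_common.sh guards (kill -0 to confirm the pid is alive, uname/ps to confirm it is a reactor and not sudo) before kill and wait. The JSON summary that follows is self-consistent: 12063.235684670719 IOPS at the configured io_size of 4096 bytes is 12063.24 x 4096 / 2^20 ≈ 47.12 MiB/s, matching the mibps field, over a runtime of about 25.13 seconds. Note also that io_failed stays 0 even though the try.txt dump further below is full of WRITE commands completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02): those are path-related NVMe statuses raised while a listener sat in the inaccessible ANA state, and the NVMe bdev layer treats them as retryable path errors and reissues the I/O on the other path rather than failing it.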
"core_mask": "0x4", 00:22:27.532 "workload": "verify", 00:22:27.532 "status": "terminated", 00:22:27.532 "verify_range": { 00:22:27.532 "start": 0, 00:22:27.532 "length": 16384 00:22:27.532 }, 00:22:27.532 "queue_depth": 128, 00:22:27.532 "io_size": 4096, 00:22:27.532 "runtime": 25.126509, 00:22:27.532 "iops": 12063.235684670719, 00:22:27.532 "mibps": 47.122014393244996, 00:22:27.532 "io_failed": 0, 00:22:27.532 "io_timeout": 0, 00:22:27.532 "avg_latency_us": 10592.224317881144, 00:22:27.532 "min_latency_us": 887.4666666666667, 00:22:27.532 "max_latency_us": 3019898.88 00:22:27.532 } 00:22:27.532 ], 00:22:27.532 "core_count": 1 00:22:27.532 } 00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 984836 00:22:27.532 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:27.532 [2024-11-06 14:05:39.774098] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:22:27.532 [2024-11-06 14:05:39.774158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984836 ] 00:22:27.532 [2024-11-06 14:05:39.852526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.532 [2024-11-06 14:05:39.887300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.532 Running I/O for 90 seconds... 00:22:27.532 12499.00 IOPS, 48.82 MiB/s [2024-11-06T13:06:06.816Z] 12737.50 IOPS, 49.76 MiB/s [2024-11-06T13:06:06.816Z] 12795.67 IOPS, 49.98 MiB/s [2024-11-06T13:06:06.816Z] 12868.25 IOPS, 50.27 MiB/s [2024-11-06T13:06:06.816Z] 12875.60 IOPS, 50.30 MiB/s [2024-11-06T13:06:06.816Z] 12873.00 IOPS, 50.29 MiB/s [2024-11-06T13:06:06.816Z] 12886.86 IOPS, 50.34 MiB/s [2024-11-06T13:06:06.816Z] 12878.38 IOPS, 50.31 MiB/s [2024-11-06T13:06:06.816Z] 12893.11 IOPS, 50.36 MiB/s [2024-11-06T13:06:06.816Z] 12894.90 IOPS, 50.37 MiB/s [2024-11-06T13:06:06.816Z] 12897.09 IOPS, 50.38 MiB/s [2024-11-06T13:06:06.816Z] [2024-11-06 14:05:52.796215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.796765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.796774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.532 [2024-11-06 14:05:52.797076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:94 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:27.532 [2024-11-06 14:05:52.797339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.532 [2024-11-06 14:05:52.797348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.533 [2024-11-06 14:05:52.797374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.533 [2024-11-06 14:05:52.797400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.533 [2024-11-06 14:05:52.797426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.533 [2024-11-06 14:05:52.797452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.533 [2024-11-06 14:05:52.797478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.533 [2024-11-06 14:05:52.797505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 
14:05:52.797522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:27.533 [2024-11-06 14:05:52.797833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:27.533 [2024-11-06 14:05:52.797842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:22:27.533 [2024-11-06 14:05:52] nvme_qpair.c: long repeated block of *NOTICE* output from nvme_io_qpair_print_command/spdk_nvme_print_completion: WRITE commands (SGL DATA BLOCK OFFSET) and READ commands (SGL TRANSPORT DATA BLOCK) on sqid:1, nsid:1, lba 115112-115936, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; individual cid/sqhd values omitted.
00:22:27.535 Periodic throughput samples logged between the two blocks: 12120.42, 11188.08, 10388.93, 10317.93, 10480.25, 10834.71, 11179.22, 11339.89, 11411.90, 11512.33, 11762.27, 11977.78 IOPS (40.30-47.35 MiB/s).
00:22:27.535 [2024-11-06 14:06:04] nvme_qpair.c: second repeated block of the same *NOTICE* pattern: WRITE and READ commands on sqid:1, nsid:1, lba 116160-117168, len:8, again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0.
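The parenthesized pair on every completion above is the NVMe (SCT/SC) status field: status code type 03h is the path-related group, and status code 02h within that group is Asymmetric Access Inaccessible, which is exactly what spdk_nvme_print_completion spells out as ASYMMETRIC ACCESS INACCESSIBLE (03/02). A minimal bash sketch of that decoding; decode_nvme_status is a hypothetical helper written here for illustration, not part of SPDK or this test suite:

#!/usr/bin/env bash
# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "03/02".
decode_nvme_status() {
    local sct=$((16#${1%/*})) sc=$((16#${1#*/}))   # split on "/" and parse both halves as hex
    case "$sct" in
        0) echo "generic command status, sc=$sc" ;;
        1) echo "command specific status, sc=$sc" ;;
        2) echo "media and data integrity error, sc=$sc" ;;
        3) echo "path related status, sc=$sc" ;;   # sc=2 in this group -> asymmetric access inaccessible
        *) echo "vendor specific or unknown, sct=$sct sc=$sc" ;;
    esac
}
decode_nvme_status 03/02   # prints: path related status, sc=2

Seeing this status on qid:1 throughout the run is consistent with the nvmf_host_multipath_status test driving one path's ANA state to inaccessible while verify I/O keeps running.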
00:22:27.537 12032.00 IOPS, 47.00 MiB/s
00:22:27.537 [2024-11-06T13:06:06.821Z] 12062.32 IOPS, 47.12 MiB/s
00:22:27.537 [2024-11-06T13:06:06.821Z] Received shutdown signal, test time was about 25.127119 seconds
00:22:27.537
00:22:27.537 Latency(us)
00:22:27.537 [2024-11-06T13:06:06.821Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:22:27.537 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:27.537 Verification LBA range: start 0x0 length 0x4000
00:22:27.537 Nvme0n1 : 25.13 12063.24 47.12 0.00 0.00 10592.22 887.47 3019898.88
00:22:27.537 [2024-11-06T13:06:06.821Z] ===================================================================================================================
00:22:27.537 [2024-11-06T13:06:06.821Z] Total : 12063.24 47.12 0.00 0.00 10592.22 887.47 3019898.88
00:22:27.537 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:27.797 rmmod nvme_tcp
00:22:27.797 rmmod nvme_fabrics
00:22:27.797 rmmod nvme_keyring
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 984470 ']'
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 984470
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 984470 ']'
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 984470
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:27.797 14:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 984470
00:22:27.797 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:27.797 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:27.797 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 984470'
00:22:27.797 killing process with pid 984470
00:22:27.797 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 984470
00:22:27.797 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 984470
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:28.056 14:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:29.960
00:22:29.960 real 0m36.776s
00:22:29.960 user 1m37.841s
00:22:29.960 sys 0m8.826s
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:29.960 ************************************
00:22:29.960 END TEST nvmf_host_multipath_status
00:22:29.960 ************************************
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:29.960 ************************************
00:22:29.960 START TEST nvmf_discovery_remove_ifc
00:22:29.960 ************************************
00:22:29.960 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:30.219 * Looking for test storage...
00:22:30.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:30.219 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.220 14:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:22:35.494 14:06:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:35.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.494 14:06:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.494 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:35.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:35.495 Found net devices under 0000:31:00.0: cvl_0_0 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:35.495 Found net devices under 0000:31:00.1: cvl_0_1 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.495 
14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:22:35.495 00:22:35.495 --- 10.0.0.2 ping statistics --- 00:22:35.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.495 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:22:35.495 00:22:35.495 --- 10.0.0.1 ping statistics --- 00:22:35.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.495 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=995913 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 995913 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 995913 ']' 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
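The nvmf_tcp_init phase traced above turns the NIC's two ports into a self-contained target/initiator pair: cvl_0_0 is moved into a private network namespace to serve as the target, cvl_0_1 stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and a ping in each direction verifies the path. Condensed from the trace (device names and addresses are the ones this run picked):

    # Condensed from the nvmf/common.sh trace above.
    TARGET_NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"    # target port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    # open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity-check both directions, as in the ping output above
    ping -c 1 10.0.0.2
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
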
00:22:35.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.495 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.495 [2024-11-06 14:06:14.676393] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:22:35.495 [2024-11-06 14:06:14.676442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.495 [2024-11-06 14:06:14.748743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.754 [2024-11-06 14:06:14.777404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.754 [2024-11-06 14:06:14.777431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.754 [2024-11-06 14:06:14.777436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.754 [2024-11-06 14:06:14.777441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.754 [2024-11-06 14:06:14.777445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.754 [2024-11-06 14:06:14.777919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.754 [2024-11-06 14:06:14.888897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.754 [2024-11-06 14:06:14.897059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:35.754 null0 00:22:35.754 [2024-11-06 14:06:14.929067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=995940 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 995940 /tmp/host.sock 00:22:35.754 14:06:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 995940 ']' 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.754 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:35.755 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:35.755 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.755 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.755 14:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:35.755 [2024-11-06 14:06:14.985073] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:22:35.755 [2024-11-06 14:06:14.985122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995940 ] 00:22:36.014 [2024-11-06 14:06:15.050030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.014 [2024-11-06 14:06:15.080111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:36.014 14:06:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.014 14:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.952 [2024-11-06 14:06:16.194751] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.952 [2024-11-06 14:06:16.194773] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.952 [2024-11-06 14:06:16.194783] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:37.212 [2024-11-06 14:06:16.283032] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:37.212 [2024-11-06 14:06:16.383940] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:37.212 [2024-11-06 14:06:16.384769] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbcb690:1 started. 00:22:37.212 [2024-11-06 14:06:16.385979] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:37.212 [2024-11-06 14:06:16.386016] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:37.212 [2024-11-06 14:06:16.386033] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:37.212 [2024-11-06 14:06:16.386045] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:37.213 [2024-11-06 14:06:16.386064] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.213 [2024-11-06 14:06:16.394455] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbcb690 was disconnected and freed. delete nvme_qpair. 
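The rpc_cmd/jq/sort/xargs blocks that repeat from here on are the host script polling its bdev list once per second until it matches an expectation (nvme0n1 after the attach above, an empty string after the interface is later removed). A reconstruction of those helpers from the @29/@33/@34 trace lines (a sketch: the pipeline is verbatim from the trace, the surrounding shell is inferred, and the rpc.py path is illustrative):

    # Reconstructed from the trace; not copied from the SPDK sources.
    rpc_cmd() {
        # the host app in this test listens on /tmp/host.sock
        scripts/rpc.py "$@"
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    # e.g. wait_for_bdev nvme0n1   # after discovery attaches the subsystem
    #      wait_for_bdev ''        # after the target interface goes away
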
00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:37.213 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.473 14:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.411 14:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.349 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.349 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.349 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.349 14:06:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.350 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.350 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.350 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.350 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.350 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:39.350 14:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.727 14:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.665 14:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.602 14:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.602 [2024-11-06 14:06:21.827080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:42.602 [2024-11-06 14:06:21.827124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.602 [2024-11-06 14:06:21.827134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-06 14:06:21.827141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.602 [2024-11-06 14:06:21.827146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-06 14:06:21.827152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.602 [2024-11-06 14:06:21.827157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-06 14:06:21.827163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.602 [2024-11-06 14:06:21.827168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-06 14:06:21.827174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.602 [2024-11-06 14:06:21.827179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-06 14:06:21.827184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba8050 is same with the state(6) to be set 00:22:42.602 [2024-11-06 14:06:21.837100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba8050 (9): Bad file descriptor 00:22:42.602 [2024-11-06 14:06:21.847135] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:42.602 [2024-11-06 14:06:21.847144] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
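By this point the test has pulled the target-side address and downed its link (the @75/@76 lines earlier), so the errno 110 read failure and the delete/disconnect/reconnect cycle above are the expected fallout. The cycle is bounded by the options passed when discovery was started; both steps, with the values from this run:

    # Teardown step, copied from the @75/@76 trace lines.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # Reconnect budget, from the @69 trace line: retry every 1 s, declare the
    # controller lost after 2 s, fail fast I/O after 1 s.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
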
00:22:42.602 [2024-11-06 14:06:21.847147] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:42.602 [2024-11-06 14:06:21.847151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:42.602 [2024-11-06 14:06:21.847170] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.539 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.799 [2024-11-06 14:06:22.858286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:43.799 [2024-11-06 14:06:22.858355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba8050 with addr=10.0.0.2, port=4420 00:22:43.799 [2024-11-06 14:06:22.858377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba8050 is same with the state(6) to be set 00:22:43.799 [2024-11-06 14:06:22.858419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba8050 (9): Bad file descriptor 00:22:43.799 [2024-11-06 14:06:22.859398] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:22:43.799 [2024-11-06 14:06:22.859480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.799 [2024-11-06 14:06:22.859504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.799 [2024-11-06 14:06:22.859526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:43.799 [2024-11-06 14:06:22.859546] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.799 [2024-11-06 14:06:22.859562] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:43.799 [2024-11-06 14:06:22.859576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:43.799 [2024-11-06 14:06:22.859597] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:22:43.799 [2024-11-06 14:06:22.859612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.799 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.799 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:43.799 14:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.736 [2024-11-06 14:06:23.862032] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:44.736 [2024-11-06 14:06:23.862048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:44.736 [2024-11-06 14:06:23.862056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:44.736 [2024-11-06 14:06:23.862061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:44.736 [2024-11-06 14:06:23.862066] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:22:44.736 [2024-11-06 14:06:23.862071] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:44.736 [2024-11-06 14:06:23.862075] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:44.736 [2024-11-06 14:06:23.862078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:44.736 [2024-11-06 14:06:23.862095] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:44.736 [2024-11-06 14:06:23.862112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.736 [2024-11-06 14:06:23.862119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.736 [2024-11-06 14:06:23.862126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.736 [2024-11-06 14:06:23.862131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.736 [2024-11-06 14:06:23.862136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.736 [2024-11-06 14:06:23.862141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.736 [2024-11-06 14:06:23.862147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.736 [2024-11-06 14:06:23.862152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.736 [2024-11-06 14:06:23.862157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.736 [2024-11-06 14:06:23.862166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.736 [2024-11-06 14:06:23.862171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:22:44.736 [2024-11-06 14:06:23.862494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb97380 (9): Bad file descriptor 00:22:44.736 [2024-11-06 14:06:23.863503] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:44.736 [2024-11-06 14:06:23.863511] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.736 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.737 14:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.737 14:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:44.737 14:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.116 14:06:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:46.116 14:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.683 [2024-11-06 14:06:25.920437] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:46.683 [2024-11-06 14:06:25.920454] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:46.683 [2024-11-06 14:06:25.920465] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:46.943 [2024-11-06 14:06:26.008719] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:46.943 14:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.943 [2024-11-06 14:06:26.188792] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:22:46.943 [2024-11-06 14:06:26.189618] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xbb26c0:1 started. 
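The repeated xtrace blocks above all come from the same helper pair in host/discovery_remove_ifc.sh: get_bdev_list asks the SPDK host application (listening on /tmp/host.sock) for its bdevs, and wait_for_bdev polls that list once per second until it matches the expected value (an empty string while the controller is torn down, nvme1n1 once discovery re-attaches). A condensed sketch of that pattern; the pipeline is taken verbatim from the trace, while the loop wrapper and its lack of a timeout are assumptions, since the helper bodies themselves are not shown in this log:

    # Sketch of the polling helpers traced above. rpc_cmd and the
    # /tmp/host.sock RPC socket come from the test environment; the
    # while-loop wrapper is inferred from the repeated trace blocks.
    get_bdev_list() {
        # List the host app's bdev names, sorted and flattened to one line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list matches the expectation ("" while
        # waiting for removal, "nvme1n1" while waiting for re-attach).
        while [[ "$(get_bdev_list)" != "$*" ]]; do
            sleep 1
        done
    }

The sleep 1 cadence matches the one-second gaps between the timestamped polls above.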
00:22:46.943 [2024-11-06 14:06:26.190553] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:46.943 [2024-11-06 14:06:26.190583] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:46.943 [2024-11-06 14:06:26.190598] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:46.943 [2024-11-06 14:06:26.190609] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:46.943 [2024-11-06 14:06:26.190616] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:46.943 [2024-11-06 14:06:26.198658] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xbb26c0 was disconnected and freed. delete nvme_qpair. 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 995940 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 995940 ']' 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 995940 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:47.881 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 995940 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 995940' 00:22:48.142 killing process with pid 995940 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 995940 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 995940 00:22:48.142 14:06:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.142 rmmod nvme_tcp 00:22:48.142 rmmod nvme_fabrics 00:22:48.142 rmmod nvme_keyring 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:22:48.142 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 995913 ']' 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 995913 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 995913 ']' 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 995913 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 995913 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 995913' 00:22:48.143 killing process with pid 995913 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 995913 00:22:48.143 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 995913 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.402 14:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.312 00:22:50.312 real 0m20.280s 00:22:50.312 user 0m25.558s 00:22:50.312 sys 0m5.001s 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.312 ************************************ 00:22:50.312 END TEST nvmf_discovery_remove_ifc 00:22:50.312 ************************************ 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.312 ************************************ 00:22:50.312 START TEST nvmf_identify_kernel_target 00:22:50.312 ************************************ 00:22:50.312 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:50.575 * Looking for test storage... 
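The "Looking for test storage" banner marks autotest_common.sh locating the per-test output area; the xtrace that follows then steps through the version-compare helpers in scripts/common.sh (lt calling cmp_versions) to decide whether the installed lcov predates 2.0 and therefore needs the legacy --rc lcov_* coverage options. A simplified sketch of that segment-wise compare idiom; the real helper additionally routes each segment through its decimal() normalizer, which is elided here:

    # Simplified sketch of scripts/common.sh's lt()/cmp_versions() as
    # traced below; decimal() normalization of odd segments is omitted.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                    # split versions on '.', '-', ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]                # all segments equal: true for ==, <=, >=
    }

Here lt 1.15 2 succeeds on the first segment (1 < 2), which is why the trace below ends with return 0 and the old-style lcov flags are exported.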
00:22:50.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:50.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.575 --rc genhtml_branch_coverage=1 00:22:50.575 --rc genhtml_function_coverage=1 00:22:50.575 --rc genhtml_legend=1 00:22:50.575 --rc geninfo_all_blocks=1 00:22:50.575 --rc geninfo_unexecuted_blocks=1 00:22:50.575 00:22:50.575 ' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:50.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.575 --rc genhtml_branch_coverage=1 00:22:50.575 --rc genhtml_function_coverage=1 00:22:50.575 --rc genhtml_legend=1 00:22:50.575 --rc geninfo_all_blocks=1 00:22:50.575 --rc geninfo_unexecuted_blocks=1 00:22:50.575 00:22:50.575 ' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:50.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.575 --rc genhtml_branch_coverage=1 00:22:50.575 --rc genhtml_function_coverage=1 00:22:50.575 --rc genhtml_legend=1 00:22:50.575 --rc geninfo_all_blocks=1 00:22:50.575 --rc geninfo_unexecuted_blocks=1 00:22:50.575 00:22:50.575 ' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:50.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.575 --rc genhtml_branch_coverage=1 00:22:50.575 --rc genhtml_function_coverage=1 00:22:50.575 --rc genhtml_legend=1 00:22:50.575 --rc geninfo_all_blocks=1 00:22:50.575 --rc geninfo_unexecuted_blocks=1 00:22:50.575 00:22:50.575 ' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.575 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:50.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.576 14:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.852 14:06:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:55.852 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.852 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:55.853 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:55.853 Found net devices under 0000:31:00.0: cvl_0_0 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:55.853 Found net devices under 0000:31:00.1: cvl_0_1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:55.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:55.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms
00:22:55.853
00:22:55.853 --- 10.0.0.2 ping statistics ---
00:22:55.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:55.853 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:55.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:55.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms
00:22:55.853
00:22:55.853 --- 10.0.0.1 ping statistics ---
00:22:55.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:55.853 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:22:55.853 14:06:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:55.853 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:55.854 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:22:55.854 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:22:55.854 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:55.854 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:55.854 14:06:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:58.391 Waiting for block devices as requested 00:22:58.391 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:58.391 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:22:58.650 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:58.650 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:58.650 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:58.910 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:58.910 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:58.910 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:58.910 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:59.171 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
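Everything from here to the nvme discover call is configure_kernel_target building a kernel NVMe/TCP target through configfs: create the subsystem, give it namespace 1 backed by the local /dev/nvme0n1 (after spdk-gpt.py and blkid confirm the disk carries no partition table and is not in use), create port 1 bound to 10.0.0.1:4420 over TCP, and link the subsystem into the port. The trace records the mkdir calls and the echoed values but not the attribute files they land in; the sketch below fills those in with the standard nvmet configfs names, so treat the right-hand paths as inferred rather than quoted from nvmf/common.sh:

    # Condensed sketch of the configfs sequence traced below; the echoed
    # values match the log, the destination attribute names are the stock
    # nvmet configfs layout and are inferred, not read from common.sh.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"       # no host NQN allow-list
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"          # expose the subsystem on the port

The nvme discover output that follows confirms the result: the well-known discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both reachable at 10.0.0.1:4420.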
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:22:59.171 No valid GPT data, bailing
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420
00:22:59.171
00:22:59.171 Discovery Log Number of Records 2, Generation counter 2
00:22:59.171 =====Discovery Log Entry 0======
00:22:59.171 trtype: tcp
00:22:59.171 adrfam: ipv4
00:22:59.171 subtype: current discovery subsystem
00:22:59.171 treq: not specified, sq flow control disable supported
00:22:59.171 portid: 1
00:22:59.171 trsvcid: 4420
00:22:59.171 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:22:59.171 traddr: 10.0.0.1
00:22:59.171 eflags: none
00:22:59.171 sectype: none
00:22:59.171 =====Discovery Log Entry 1======
00:22:59.171 trtype: tcp
00:22:59.171 adrfam: ipv4
00:22:59.171 subtype: nvme subsystem
00:22:59.171 treq: not specified, sq flow control disable 
supported 00:22:59.171 portid: 1 00:22:59.171 trsvcid: 4420 00:22:59.171 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:59.171 traddr: 10.0.0.1 00:22:59.171 eflags: none 00:22:59.171 sectype: none 00:22:59.171 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:59.171 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:59.171 ===================================================== 00:22:59.171 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:59.171 ===================================================== 00:22:59.171 Controller Capabilities/Features 00:22:59.171 ================================ 00:22:59.171 Vendor ID: 0000 00:22:59.171 Subsystem Vendor ID: 0000 00:22:59.172 Serial Number: ac85abcd7d4357c5efed 00:22:59.172 Model Number: Linux 00:22:59.172 Firmware Version: 6.8.9-20 00:22:59.172 Recommended Arb Burst: 0 00:22:59.172 IEEE OUI Identifier: 00 00 00 00:22:59.172 Multi-path I/O 00:22:59.172 May have multiple subsystem ports: No 00:22:59.172 May have multiple controllers: No 00:22:59.172 Associated with SR-IOV VF: No 00:22:59.172 Max Data Transfer Size: Unlimited 00:22:59.172 Max Number of Namespaces: 0 00:22:59.172 Max Number of I/O Queues: 1024 00:22:59.172 NVMe Specification Version (VS): 1.3 00:22:59.172 NVMe Specification Version (Identify): 1.3 00:22:59.172 Maximum Queue Entries: 1024 00:22:59.172 Contiguous Queues Required: No 00:22:59.172 Arbitration Mechanisms Supported 00:22:59.172 Weighted Round Robin: Not Supported 00:22:59.172 Vendor Specific: Not Supported 00:22:59.172 Reset Timeout: 7500 ms 00:22:59.172 Doorbell Stride: 4 bytes 00:22:59.172 NVM Subsystem Reset: Not Supported 00:22:59.172 Command Sets Supported 00:22:59.172 NVM Command Set: Supported 00:22:59.172 Boot Partition: Not Supported 00:22:59.172 Memory Page Size Minimum: 4096 bytes 00:22:59.172 Memory Page Size Maximum: 4096 bytes 00:22:59.172 Persistent Memory Region: Not Supported 00:22:59.172 Optional Asynchronous Events Supported 00:22:59.172 Namespace Attribute Notices: Not Supported 00:22:59.172 Firmware Activation Notices: Not Supported 00:22:59.172 ANA Change Notices: Not Supported 00:22:59.172 PLE Aggregate Log Change Notices: Not Supported 00:22:59.172 LBA Status Info Alert Notices: Not Supported 00:22:59.172 EGE Aggregate Log Change Notices: Not Supported 00:22:59.172 Normal NVM Subsystem Shutdown event: Not Supported 00:22:59.172 Zone Descriptor Change Notices: Not Supported 00:22:59.172 Discovery Log Change Notices: Supported 00:22:59.172 Controller Attributes 00:22:59.172 128-bit Host Identifier: Not Supported 00:22:59.172 Non-Operational Permissive Mode: Not Supported 00:22:59.172 NVM Sets: Not Supported 00:22:59.172 Read Recovery Levels: Not Supported 00:22:59.172 Endurance Groups: Not Supported 00:22:59.172 Predictable Latency Mode: Not Supported 00:22:59.172 Traffic Based Keep ALive: Not Supported 00:22:59.172 Namespace Granularity: Not Supported 00:22:59.172 SQ Associations: Not Supported 00:22:59.172 UUID List: Not Supported 00:22:59.172 Multi-Domain Subsystem: Not Supported 00:22:59.172 Fixed Capacity Management: Not Supported 00:22:59.172 Variable Capacity Management: Not Supported 00:22:59.172 Delete Endurance Group: Not Supported 00:22:59.172 Delete NVM Set: Not Supported 00:22:59.172 Extended LBA Formats Supported: Not Supported 00:22:59.172 Flexible Data Placement 
Supported: Not Supported 00:22:59.172 00:22:59.172 Controller Memory Buffer Support 00:22:59.172 ================================ 00:22:59.172 Supported: No 00:22:59.172 00:22:59.172 Persistent Memory Region Support 00:22:59.172 ================================ 00:22:59.172 Supported: No 00:22:59.172 00:22:59.172 Admin Command Set Attributes 00:22:59.172 ============================ 00:22:59.172 Security Send/Receive: Not Supported 00:22:59.172 Format NVM: Not Supported 00:22:59.172 Firmware Activate/Download: Not Supported 00:22:59.172 Namespace Management: Not Supported 00:22:59.172 Device Self-Test: Not Supported 00:22:59.172 Directives: Not Supported 00:22:59.172 NVMe-MI: Not Supported 00:22:59.172 Virtualization Management: Not Supported 00:22:59.172 Doorbell Buffer Config: Not Supported 00:22:59.172 Get LBA Status Capability: Not Supported 00:22:59.172 Command & Feature Lockdown Capability: Not Supported 00:22:59.172 Abort Command Limit: 1 00:22:59.172 Async Event Request Limit: 1 00:22:59.172 Number of Firmware Slots: N/A 00:22:59.172 Firmware Slot 1 Read-Only: N/A 00:22:59.172 Firmware Activation Without Reset: N/A 00:22:59.172 Multiple Update Detection Support: N/A 00:22:59.172 Firmware Update Granularity: No Information Provided 00:22:59.172 Per-Namespace SMART Log: No 00:22:59.172 Asymmetric Namespace Access Log Page: Not Supported 00:22:59.172 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:59.172 Command Effects Log Page: Not Supported 00:22:59.172 Get Log Page Extended Data: Supported 00:22:59.172 Telemetry Log Pages: Not Supported 00:22:59.172 Persistent Event Log Pages: Not Supported 00:22:59.172 Supported Log Pages Log Page: May Support 00:22:59.172 Commands Supported & Effects Log Page: Not Supported 00:22:59.172 Feature Identifiers & Effects Log Page:May Support 00:22:59.172 NVMe-MI Commands & Effects Log Page: May Support 00:22:59.172 Data Area 4 for Telemetry Log: Not Supported 00:22:59.172 Error Log Page Entries Supported: 1 00:22:59.172 Keep Alive: Not Supported 00:22:59.172 00:22:59.172 NVM Command Set Attributes 00:22:59.172 ========================== 00:22:59.172 Submission Queue Entry Size 00:22:59.172 Max: 1 00:22:59.172 Min: 1 00:22:59.172 Completion Queue Entry Size 00:22:59.172 Max: 1 00:22:59.172 Min: 1 00:22:59.172 Number of Namespaces: 0 00:22:59.172 Compare Command: Not Supported 00:22:59.172 Write Uncorrectable Command: Not Supported 00:22:59.172 Dataset Management Command: Not Supported 00:22:59.172 Write Zeroes Command: Not Supported 00:22:59.172 Set Features Save Field: Not Supported 00:22:59.172 Reservations: Not Supported 00:22:59.172 Timestamp: Not Supported 00:22:59.172 Copy: Not Supported 00:22:59.172 Volatile Write Cache: Not Present 00:22:59.172 Atomic Write Unit (Normal): 1 00:22:59.172 Atomic Write Unit (PFail): 1 00:22:59.172 Atomic Compare & Write Unit: 1 00:22:59.172 Fused Compare & Write: Not Supported 00:22:59.172 Scatter-Gather List 00:22:59.172 SGL Command Set: Supported 00:22:59.172 SGL Keyed: Not Supported 00:22:59.172 SGL Bit Bucket Descriptor: Not Supported 00:22:59.172 SGL Metadata Pointer: Not Supported 00:22:59.172 Oversized SGL: Not Supported 00:22:59.172 SGL Metadata Address: Not Supported 00:22:59.172 SGL Offset: Supported 00:22:59.172 Transport SGL Data Block: Not Supported 00:22:59.172 Replay Protected Memory Block: Not Supported 00:22:59.172 00:22:59.172 Firmware Slot Information 00:22:59.172 ========================= 00:22:59.172 Active slot: 0 00:22:59.172 00:22:59.172 00:22:59.172 Error Log 00:22:59.172 
========= 00:22:59.172 00:22:59.172 Active Namespaces 00:22:59.172 ================= 00:22:59.172 Discovery Log Page 00:22:59.172 ================== 00:22:59.172 Generation Counter: 2 00:22:59.172 Number of Records: 2 00:22:59.172 Record Format: 0 00:22:59.172 00:22:59.172 Discovery Log Entry 0 00:22:59.172 ---------------------- 00:22:59.172 Transport Type: 3 (TCP) 00:22:59.172 Address Family: 1 (IPv4) 00:22:59.172 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:59.172 Entry Flags: 00:22:59.172 Duplicate Returned Information: 0 00:22:59.172 Explicit Persistent Connection Support for Discovery: 0 00:22:59.172 Transport Requirements: 00:22:59.172 Secure Channel: Not Specified 00:22:59.172 Port ID: 1 (0x0001) 00:22:59.172 Controller ID: 65535 (0xffff) 00:22:59.172 Admin Max SQ Size: 32 00:22:59.172 Transport Service Identifier: 4420 00:22:59.172 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:59.173 Transport Address: 10.0.0.1 00:22:59.173 Discovery Log Entry 1 00:22:59.173 ---------------------- 00:22:59.173 Transport Type: 3 (TCP) 00:22:59.173 Address Family: 1 (IPv4) 00:22:59.173 Subsystem Type: 2 (NVM Subsystem) 00:22:59.173 Entry Flags: 00:22:59.173 Duplicate Returned Information: 0 00:22:59.173 Explicit Persistent Connection Support for Discovery: 0 00:22:59.173 Transport Requirements: 00:22:59.173 Secure Channel: Not Specified 00:22:59.173 Port ID: 1 (0x0001) 00:22:59.173 Controller ID: 65535 (0xffff) 00:22:59.173 Admin Max SQ Size: 32 00:22:59.173 Transport Service Identifier: 4420 00:22:59.173 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:59.173 Transport Address: 10.0.0.1 00:22:59.173 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:59.173 get_feature(0x01) failed 00:22:59.173 get_feature(0x02) failed 00:22:59.173 get_feature(0x04) failed 00:22:59.173 ===================================================== 00:22:59.173 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:59.173 ===================================================== 00:22:59.173 Controller Capabilities/Features 00:22:59.173 ================================ 00:22:59.173 Vendor ID: 0000 00:22:59.173 Subsystem Vendor ID: 0000 00:22:59.173 Serial Number: 4140d699720c447437e9 00:22:59.173 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:59.173 Firmware Version: 6.8.9-20 00:22:59.173 Recommended Arb Burst: 6 00:22:59.173 IEEE OUI Identifier: 00 00 00 00:22:59.173 Multi-path I/O 00:22:59.173 May have multiple subsystem ports: Yes 00:22:59.173 May have multiple controllers: Yes 00:22:59.173 Associated with SR-IOV VF: No 00:22:59.173 Max Data Transfer Size: Unlimited 00:22:59.173 Max Number of Namespaces: 1024 00:22:59.173 Max Number of I/O Queues: 128 00:22:59.173 NVMe Specification Version (VS): 1.3 00:22:59.173 NVMe Specification Version (Identify): 1.3 00:22:59.173 Maximum Queue Entries: 1024 00:22:59.173 Contiguous Queues Required: No 00:22:59.173 Arbitration Mechanisms Supported 00:22:59.173 Weighted Round Robin: Not Supported 00:22:59.173 Vendor Specific: Not Supported 00:22:59.173 Reset Timeout: 7500 ms 00:22:59.173 Doorbell Stride: 4 bytes 00:22:59.173 NVM Subsystem Reset: Not Supported 00:22:59.173 Command Sets Supported 00:22:59.173 NVM Command Set: Supported 00:22:59.173 Boot Partition: Not Supported 00:22:59.173 
Memory Page Size Minimum: 4096 bytes 00:22:59.173 Memory Page Size Maximum: 4096 bytes 00:22:59.173 Persistent Memory Region: Not Supported 00:22:59.173 Optional Asynchronous Events Supported 00:22:59.173 Namespace Attribute Notices: Supported 00:22:59.173 Firmware Activation Notices: Not Supported 00:22:59.173 ANA Change Notices: Supported 00:22:59.173 PLE Aggregate Log Change Notices: Not Supported 00:22:59.173 LBA Status Info Alert Notices: Not Supported 00:22:59.173 EGE Aggregate Log Change Notices: Not Supported 00:22:59.173 Normal NVM Subsystem Shutdown event: Not Supported 00:22:59.173 Zone Descriptor Change Notices: Not Supported 00:22:59.173 Discovery Log Change Notices: Not Supported 00:22:59.173 Controller Attributes 00:22:59.173 128-bit Host Identifier: Supported 00:22:59.173 Non-Operational Permissive Mode: Not Supported 00:22:59.173 NVM Sets: Not Supported 00:22:59.173 Read Recovery Levels: Not Supported 00:22:59.173 Endurance Groups: Not Supported 00:22:59.173 Predictable Latency Mode: Not Supported 00:22:59.173 Traffic Based Keep ALive: Supported 00:22:59.173 Namespace Granularity: Not Supported 00:22:59.173 SQ Associations: Not Supported 00:22:59.173 UUID List: Not Supported 00:22:59.173 Multi-Domain Subsystem: Not Supported 00:22:59.173 Fixed Capacity Management: Not Supported 00:22:59.173 Variable Capacity Management: Not Supported 00:22:59.173 Delete Endurance Group: Not Supported 00:22:59.173 Delete NVM Set: Not Supported 00:22:59.173 Extended LBA Formats Supported: Not Supported 00:22:59.173 Flexible Data Placement Supported: Not Supported 00:22:59.173 00:22:59.173 Controller Memory Buffer Support 00:22:59.173 ================================ 00:22:59.173 Supported: No 00:22:59.173 00:22:59.173 Persistent Memory Region Support 00:22:59.173 ================================ 00:22:59.173 Supported: No 00:22:59.173 00:22:59.173 Admin Command Set Attributes 00:22:59.173 ============================ 00:22:59.173 Security Send/Receive: Not Supported 00:22:59.173 Format NVM: Not Supported 00:22:59.173 Firmware Activate/Download: Not Supported 00:22:59.173 Namespace Management: Not Supported 00:22:59.173 Device Self-Test: Not Supported 00:22:59.173 Directives: Not Supported 00:22:59.173 NVMe-MI: Not Supported 00:22:59.173 Virtualization Management: Not Supported 00:22:59.173 Doorbell Buffer Config: Not Supported 00:22:59.173 Get LBA Status Capability: Not Supported 00:22:59.173 Command & Feature Lockdown Capability: Not Supported 00:22:59.173 Abort Command Limit: 4 00:22:59.173 Async Event Request Limit: 4 00:22:59.173 Number of Firmware Slots: N/A 00:22:59.173 Firmware Slot 1 Read-Only: N/A 00:22:59.173 Firmware Activation Without Reset: N/A 00:22:59.173 Multiple Update Detection Support: N/A 00:22:59.173 Firmware Update Granularity: No Information Provided 00:22:59.173 Per-Namespace SMART Log: Yes 00:22:59.173 Asymmetric Namespace Access Log Page: Supported 00:22:59.173 ANA Transition Time : 10 sec 00:22:59.173 00:22:59.173 Asymmetric Namespace Access Capabilities 00:22:59.173 ANA Optimized State : Supported 00:22:59.173 ANA Non-Optimized State : Supported 00:22:59.173 ANA Inaccessible State : Supported 00:22:59.173 ANA Persistent Loss State : Supported 00:22:59.173 ANA Change State : Supported 00:22:59.173 ANAGRPID is not changed : No 00:22:59.173 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:59.173 00:22:59.173 ANA Group Identifier Maximum : 128 00:22:59.173 Number of ANA Group Identifiers : 128 00:22:59.173 Max Number of Allowed Namespaces : 1024 00:22:59.173 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:59.173 Command Effects Log Page: Supported 00:22:59.173 Get Log Page Extended Data: Supported 00:22:59.173 Telemetry Log Pages: Not Supported 00:22:59.173 Persistent Event Log Pages: Not Supported 00:22:59.173 Supported Log Pages Log Page: May Support 00:22:59.173 Commands Supported & Effects Log Page: Not Supported 00:22:59.173 Feature Identifiers & Effects Log Page:May Support 00:22:59.173 NVMe-MI Commands & Effects Log Page: May Support 00:22:59.173 Data Area 4 for Telemetry Log: Not Supported 00:22:59.173 Error Log Page Entries Supported: 128 00:22:59.173 Keep Alive: Supported 00:22:59.173 Keep Alive Granularity: 1000 ms 00:22:59.173 00:22:59.173 NVM Command Set Attributes 00:22:59.173 ========================== 00:22:59.173 Submission Queue Entry Size 00:22:59.173 Max: 64 00:22:59.173 Min: 64 00:22:59.173 Completion Queue Entry Size 00:22:59.173 Max: 16 00:22:59.173 Min: 16 00:22:59.173 Number of Namespaces: 1024 00:22:59.173 Compare Command: Not Supported 00:22:59.173 Write Uncorrectable Command: Not Supported 00:22:59.173 Dataset Management Command: Supported 00:22:59.173 Write Zeroes Command: Supported 00:22:59.173 Set Features Save Field: Not Supported 00:22:59.173 Reservations: Not Supported 00:22:59.173 Timestamp: Not Supported 00:22:59.173 Copy: Not Supported 00:22:59.173 Volatile Write Cache: Present 00:22:59.173 Atomic Write Unit (Normal): 1 00:22:59.173 Atomic Write Unit (PFail): 1 00:22:59.173 Atomic Compare & Write Unit: 1 00:22:59.173 Fused Compare & Write: Not Supported 00:22:59.173 Scatter-Gather List 00:22:59.173 SGL Command Set: Supported 00:22:59.173 SGL Keyed: Not Supported 00:22:59.173 SGL Bit Bucket Descriptor: Not Supported 00:22:59.173 SGL Metadata Pointer: Not Supported 00:22:59.173 Oversized SGL: Not Supported 00:22:59.173 SGL Metadata Address: Not Supported 00:22:59.173 SGL Offset: Supported 00:22:59.173 Transport SGL Data Block: Not Supported 00:22:59.173 Replay Protected Memory Block: Not Supported 00:22:59.173 00:22:59.173 Firmware Slot Information 00:22:59.173 ========================= 00:22:59.173 Active slot: 0 00:22:59.173 00:22:59.173 Asymmetric Namespace Access 00:22:59.173 =========================== 00:22:59.173 Change Count : 0 00:22:59.174 Number of ANA Group Descriptors : 1 00:22:59.174 ANA Group Descriptor : 0 00:22:59.174 ANA Group ID : 1 00:22:59.174 Number of NSID Values : 1 00:22:59.174 Change Count : 0 00:22:59.174 ANA State : 1 00:22:59.174 Namespace Identifier : 1 00:22:59.174 00:22:59.174 Commands Supported and Effects 00:22:59.174 ============================== 00:22:59.174 Admin Commands 00:22:59.174 -------------- 00:22:59.174 Get Log Page (02h): Supported 00:22:59.174 Identify (06h): Supported 00:22:59.174 Abort (08h): Supported 00:22:59.174 Set Features (09h): Supported 00:22:59.174 Get Features (0Ah): Supported 00:22:59.174 Asynchronous Event Request (0Ch): Supported 00:22:59.174 Keep Alive (18h): Supported 00:22:59.174 I/O Commands 00:22:59.174 ------------ 00:22:59.174 Flush (00h): Supported 00:22:59.174 Write (01h): Supported LBA-Change 00:22:59.174 Read (02h): Supported 00:22:59.174 Write Zeroes (08h): Supported LBA-Change 00:22:59.174 Dataset Management (09h): Supported 00:22:59.174 00:22:59.174 Error Log 00:22:59.174 ========= 00:22:59.174 Entry: 0 00:22:59.174 Error Count: 0x3 00:22:59.174 Submission Queue Id: 0x0 00:22:59.174 Command Id: 0x5 00:22:59.174 Phase Bit: 0 00:22:59.174 Status Code: 0x2 00:22:59.174 Status Code Type: 0x0 00:22:59.174 Do Not Retry: 1 00:22:59.174 
Error Location: 0x28 00:22:59.174 LBA: 0x0 00:22:59.174 Namespace: 0x0 00:22:59.174 Vendor Log Page: 0x0 00:22:59.174 ----------- 00:22:59.174 Entry: 1 00:22:59.174 Error Count: 0x2 00:22:59.174 Submission Queue Id: 0x0 00:22:59.174 Command Id: 0x5 00:22:59.174 Phase Bit: 0 00:22:59.174 Status Code: 0x2 00:22:59.174 Status Code Type: 0x0 00:22:59.174 Do Not Retry: 1 00:22:59.174 Error Location: 0x28 00:22:59.174 LBA: 0x0 00:22:59.174 Namespace: 0x0 00:22:59.174 Vendor Log Page: 0x0 00:22:59.174 ----------- 00:22:59.174 Entry: 2 00:22:59.174 Error Count: 0x1 00:22:59.174 Submission Queue Id: 0x0 00:22:59.174 Command Id: 0x4 00:22:59.174 Phase Bit: 0 00:22:59.174 Status Code: 0x2 00:22:59.174 Status Code Type: 0x0 00:22:59.174 Do Not Retry: 1 00:22:59.174 Error Location: 0x28 00:22:59.174 LBA: 0x0 00:22:59.174 Namespace: 0x0 00:22:59.174 Vendor Log Page: 0x0 00:22:59.174 00:22:59.174 Number of Queues 00:22:59.174 ================ 00:22:59.174 Number of I/O Submission Queues: 128 00:22:59.174 Number of I/O Completion Queues: 128 00:22:59.174 00:22:59.174 ZNS Specific Controller Data 00:22:59.174 ============================ 00:22:59.174 Zone Append Size Limit: 0 00:22:59.174 00:22:59.174 00:22:59.174 Active Namespaces 00:22:59.174 ================= 00:22:59.174 get_feature(0x05) failed 00:22:59.174 Namespace ID:1 00:22:59.174 Command Set Identifier: NVM (00h) 00:22:59.174 Deallocate: Supported 00:22:59.174 Deallocated/Unwritten Error: Not Supported 00:22:59.174 Deallocated Read Value: Unknown 00:22:59.174 Deallocate in Write Zeroes: Not Supported 00:22:59.174 Deallocated Guard Field: 0xFFFF 00:22:59.174 Flush: Supported 00:22:59.174 Reservation: Not Supported 00:22:59.174 Namespace Sharing Capabilities: Multiple Controllers 00:22:59.174 Size (in LBAs): 3750748848 (1788GiB) 00:22:59.174 Capacity (in LBAs): 3750748848 (1788GiB) 00:22:59.174 Utilization (in LBAs): 3750748848 (1788GiB) 00:22:59.174 UUID: 1920e9c8-391a-4140-904a-dc774f67b5d0 00:22:59.174 Thin Provisioning: Not Supported 00:22:59.174 Per-NS Atomic Units: Yes 00:22:59.174 Atomic Write Unit (Normal): 8 00:22:59.174 Atomic Write Unit (PFail): 8 00:22:59.174 Preferred Write Granularity: 8 00:22:59.174 Atomic Compare & Write Unit: 8 00:22:59.174 Atomic Boundary Size (Normal): 0 00:22:59.174 Atomic Boundary Size (PFail): 0 00:22:59.174 Atomic Boundary Offset: 0 00:22:59.174 NGUID/EUI64 Never Reused: No 00:22:59.174 ANA group ID: 1 00:22:59.174 Namespace Write Protected: No 00:22:59.174 Number of LBA Formats: 1 00:22:59.174 Current LBA Format: LBA Format #00 00:22:59.174 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:59.174 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.174 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.174 rmmod nvme_tcp 00:22:59.174 rmmod nvme_fabrics 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.434 14:06:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:01.340 14:06:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:03.877 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:03.877 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:05.787 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:05.787 00:23:05.787 real 0m15.218s 00:23:05.787 user 0m3.218s 00:23:05.787 sys 0m7.373s 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.787 ************************************ 00:23:05.787 END TEST nvmf_identify_kernel_target 00:23:05.787 ************************************ 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.787 ************************************ 00:23:05.787 START TEST nvmf_auth_host 00:23:05.787 ************************************ 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:05.787 * Looking for test storage... 
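The identify pass above ran against a Linux-kernel nvmet target, and clean_kernel_target dismantled it through configfs in strict reverse order of construction before setup.sh rebound the ioatdma and nvme devices to vfio-pci for the next test. A minimal sketch of that teardown, with the NQN and the namespace/port numbering taken from the log; the redirect target of the lone 'echo 0' is elided by xtrace and is assumed here to be the namespace's enable attribute:

  nqn=nqn.2016-06.io.spdk:testnqn
  cfs=/sys/kernel/config/nvmet
  echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"  # quiesce the namespace (assumed target of 'echo 0')
  rm -f "$cfs/ports/1/subsystems/$nqn"                 # unlink the subsystem from TCP port 1
  rmdir "$cfs/subsystems/$nqn/namespaces/1"            # configfs must be removed leaf-first
  rmdir "$cfs/ports/1"
  rmdir "$cfs/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                          # finally unload the kernel target modules

The ordering matters: configfs refuses to remove a subsystem directory that still holds namespaces or is still linked into a port, which is why the trace walks leaf to root.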
00:23:05.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.787 --rc genhtml_branch_coverage=1 00:23:05.787 --rc genhtml_function_coverage=1 00:23:05.787 --rc genhtml_legend=1 00:23:05.787 --rc geninfo_all_blocks=1 00:23:05.787 --rc geninfo_unexecuted_blocks=1 00:23:05.787 00:23:05.787 ' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.787 --rc genhtml_branch_coverage=1 00:23:05.787 --rc genhtml_function_coverage=1 00:23:05.787 --rc genhtml_legend=1 00:23:05.787 --rc geninfo_all_blocks=1 00:23:05.787 --rc geninfo_unexecuted_blocks=1 00:23:05.787 00:23:05.787 ' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.787 --rc genhtml_branch_coverage=1 00:23:05.787 --rc genhtml_function_coverage=1 00:23:05.787 --rc genhtml_legend=1 00:23:05.787 --rc geninfo_all_blocks=1 00:23:05.787 --rc geninfo_unexecuted_blocks=1 00:23:05.787 00:23:05.787 ' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:05.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.787 --rc genhtml_branch_coverage=1 00:23:05.787 --rc genhtml_function_coverage=1 00:23:05.787 --rc genhtml_legend=1 00:23:05.787 --rc geninfo_all_blocks=1 00:23:05.787 --rc geninfo_unexecuted_blocks=1 00:23:05.787 00:23:05.787 ' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.787 14:06:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.787 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.788 14:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.368 14:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:12.368 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:12.368 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.368 
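What is being traced in this stretch is nvmf/common.sh classifying the machine's NICs: each PCI function's device ID is checked against the known E810/X722/Mellanox ID tables (both functions at 0000:31:00.0 and 0000:31:00.1 report 0x159b and are kept as e810 ports), and each surviving function is then resolved to its kernel netdev through sysfs, producing the 'Found net devices under ...' lines that follow. The resolution step reduces to a sketch like this (PCI address from the log; the loop shape is illustrative):

  pci=0000:31:00.0
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
  done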
14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.368 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:12.368 Found net devices under 0000:31:00.0: cvl_0_0 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:12.369 Found net devices under 0000:31:00.1: cvl_0_1 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.369 14:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:23:12.369 00:23:12.369 --- 10.0.0.2 ping statistics --- 00:23:12.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.369 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:12.369 00:23:12.369 --- 10.0.0.1 ping statistics --- 00:23:12.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.369 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1010712 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1010712 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1010712 ']' 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
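Before launching the target, the harness has built a two-namespace NVMe/TCP topology out of the two e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits TCP/4420 on the initiator interface, and the two pings above prove reachability in both directions; nvmf_tgt then runs inside the namespace via NVMF_TARGET_NS_CMD. The same shape can be reproduced on any box with a veth pair standing in for the physical ports; the interface and namespace names below are illustrative:

  ip netns add tgt_ns
  ip link add veth_ini type veth peer name veth_tgt
  ip link set veth_tgt netns tgt_ns
  ip addr add 10.0.0.1/24 dev veth_ini
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_ini up
  ip netns exec tgt_ns ip link set veth_tgt up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # mirrors the harness's allow rule
  ping -c 1 10.0.0.2                                              # target side reachable from the initiator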
00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:12.369 14:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa07561332230ea3834fe3b08829a23c 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.p6R 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa07561332230ea3834fe3b08829a23c 0 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa07561332230ea3834fe3b08829a23c 0 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa07561332230ea3834fe3b08829a23c 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.p6R 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.p6R 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.p6R 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.369 14:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2fdf641ca0f99a0f43c7641b28e36a73320498492f843baf7e1e7258b1dcfbd1 00:23:12.369 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1FL 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2fdf641ca0f99a0f43c7641b28e36a73320498492f843baf7e1e7258b1dcfbd1 3 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2fdf641ca0f99a0f43c7641b28e36a73320498492f843baf7e1e7258b1dcfbd1 3 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2fdf641ca0f99a0f43c7641b28e36a73320498492f843baf7e1e7258b1dcfbd1 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:12.370 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1FL 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1FL 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1FL 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=931948b3e90f6e64db1e507876220d44dba08afaedfe7a19 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Hfp 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 931948b3e90f6e64db1e507876220d44dba08afaedfe7a19 0 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 931948b3e90f6e64db1e507876220d44dba08afaedfe7a19 0 
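Each gen_dhchap_key call in this stretch follows the same pattern: draw len/2 random bytes as a hex string with xxd, pick a digest index from the map above (null=0, sha256=1, sha384=2, sha512=3), and let format_dhchap_key wrap the result as a DHHC-1 secret in a mktemp'd /tmp/spdk.key-* file chmodded 0600. The wrapping python one-liner itself is elided by xtrace; the sketch below assumes the standard DH-HMAC-CHAP interchange form, DHHC-1:<digest>:<base64(key || crc32)>: with the CRC-32 appended little-endian, which is also the shape a recent nvme-cli emits from nvme gen-dhchap-key:

  key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in 'gen_dhchap_key null 32'
  # assumed layout: DHHC-1:<digest as two hex digits>:<base64(raw key + little-endian CRC-32)>:
  python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(raw).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(raw+crc).decode()))' "$key" 0

Whether the helper hex-decodes the string or uses it as ASCII is not visible in this log, so treat the snippet as a sketch of the interchange format rather than a byte-exact reproduction of format_dhchap_key.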
00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=931948b3e90f6e64db1e507876220d44dba08afaedfe7a19 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Hfp 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Hfp 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Hfp 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ba0184db05082851cf3f5d624cc9c76627d3392a02586bf 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Atn 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ba0184db05082851cf3f5d624cc9c76627d3392a02586bf 2 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ba0184db05082851cf3f5d624cc9c76627d3392a02586bf 2 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ba0184db05082851cf3f5d624cc9c76627d3392a02586bf 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Atn 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Atn 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Atn 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.630 14:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f382f4a94f0c4b0917f7943ba86eb570 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.x5r 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f382f4a94f0c4b0917f7943ba86eb570 1 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f382f4a94f0c4b0917f7943ba86eb570 1 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f382f4a94f0c4b0917f7943ba86eb570 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.x5r 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.x5r 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.x5r 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.630 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a0892db386fee696a6572b4a690d2c1c 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9DW 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a0892db386fee696a6572b4a690d2c1c 1 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a0892db386fee696a6572b4a690d2c1c 1 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a0892db386fee696a6572b4a690d2c1c 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9DW 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9DW 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9DW 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=452912cbe6385ff79a6cc1685cca90330da8945ee0bb6aa5 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FVo 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 452912cbe6385ff79a6cc1685cca90330da8945ee0bb6aa5 2 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 452912cbe6385ff79a6cc1685cca90330da8945ee0bb6aa5 2 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=452912cbe6385ff79a6cc1685cca90330da8945ee0bb6aa5 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:12.631 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.890 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FVo 00:23:12.890 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FVo 00:23:12.890 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FVo 00:23:12.890 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:12.891 14:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=606fed3485afedbf084001efcddfe3be 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mlT 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 606fed3485afedbf084001efcddfe3be 0 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 606fed3485afedbf084001efcddfe3be 0 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=606fed3485afedbf084001efcddfe3be 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mlT 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mlT 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.mlT 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0526faa8f334dcee7695350bac60c00ee859edcc942964682848aefd56716c88 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.g5F 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0526faa8f334dcee7695350bac60c00ee859edcc942964682848aefd56716c88 3 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0526faa8f334dcee7695350bac60c00ee859edcc942964682848aefd56716c88 3 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0526faa8f334dcee7695350bac60c00ee859edcc942964682848aefd56716c88 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:12.891 14:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.g5F 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.g5F 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.g5F 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1010712 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1010712 ']' 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:12.891 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.p6R 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1FL ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1FL 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Hfp 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Atn ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
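The gen_dhchap_key calls above all follow the same recipe: draw half as many random bytes as requested hex characters via xxd, wrap the hex string in a DHHC-1:<digest>:<secret>: envelope, and restrict the temp file to mode 0600. A minimal standalone sketch of that recipe follows; gen_key_sketch is a hypothetical name, and the python step is an assumption reconstructed from the printed keys (the base64 payload appears to be the ASCII key followed by a CRC-32, taken here as little-endian), not code copied from nvmf/common.sh.

# Hedged sketch of the key-generation recipe traced above; the CRC-32
# suffix and its byte order are assumptions, as noted in the lead-in.
gen_key_sketch() {
    local digest=$1 len=$2   # digest index 0..3 (null/sha256/sha384/sha512), len in hex chars
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 -c '
import base64, sys, zlib
key = sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed little-endian CRC-32 trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[1]), base64.b64encode(key + crc).decode()))
' "$digest" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

For example, gen_key_sketch 2 48 would emit a sha384-class secret of the same shape as /tmp/spdk.key-sha384.Atn above.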
/tmp/spdk.key-sha384.Atn 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.x5r 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9DW ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9DW 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FVo 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mlT ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mlT 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.g5F 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:13.151 14:06:52 
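Once generated, every secret file is registered with the running SPDK target under a stable keyring name, key<i> for the host secret and ckey<i> for the optional controller (bidirectional) secret; waitforlisten merely blocks until the target answers on /var/tmp/spdk.sock. The rpc_cmd keyring_file_add_key calls above condense to the following loop, written here against scripts/rpc.py directly:

# Condensed form of the registration loop traced above (host/auth.sh@80-82).
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    # A controller key is optional; slot 4 above deliberately has none.
    [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done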
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:13.151 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:13.152 14:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:15.687 Waiting for block devices as requested 00:23:15.687 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:15.687 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:15.687 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:15.687 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:15.687 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:15.945 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:15.945 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:15.945 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:15.945 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:23:16.204 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:16.204 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:16.204 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:16.462 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:16.462 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:16.462 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:16.462 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:16.463 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:17.030 No valid GPT data, bailing 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:17.030 14:06:56 
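The scan above picks a backing device for the kernel target: it walks /sys/block/nvme*, skips zoned namespaces, and treats "No valid GPT data, bailing" from spdk-gpt.py as confirmation that /dev/nvme0n1 carries no partition table and is safe to claim. In outline (simplified here to the blkid probe; the trace also runs spdk-gpt.py first):

# Outline of the backing-device scan traced above (nvmf/common.sh@678-681).
nvme=""
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=${block##*/}
    # Zoned namespaces cannot back a plain nvmet namespace.
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # Skip anything that already carries a partition table.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
    nvme=/dev/$dev
    break
done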
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:23:17.030 00:23:17.030 Discovery Log Number of Records 2, Generation counter 2 00:23:17.030 =====Discovery Log Entry 0====== 00:23:17.030 trtype: tcp 00:23:17.030 adrfam: ipv4 00:23:17.030 subtype: current discovery subsystem 00:23:17.030 treq: not specified, sq flow control disable supported 00:23:17.030 portid: 1 00:23:17.030 trsvcid: 4420 00:23:17.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:17.030 traddr: 10.0.0.1 00:23:17.030 eflags: none 00:23:17.030 sectype: none 00:23:17.030 =====Discovery Log Entry 1====== 00:23:17.030 trtype: tcp 00:23:17.030 adrfam: ipv4 00:23:17.030 subtype: nvme subsystem 00:23:17.030 treq: not specified, sq flow control disable supported 00:23:17.030 portid: 1 00:23:17.030 trsvcid: 4420 00:23:17.030 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:17.030 traddr: 10.0.0.1 00:23:17.030 eflags: none 00:23:17.030 sectype: none 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.030 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
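xtrace records the echo commands above but not their redirection targets, so the configfs layout has to be read back in. The reconstruction below maps each write onto the standard kernel nvmet configfs attributes; the attribute paths are inferred from that layout, not shown in this log:

# Inferred reconstruction of the configfs writes traced above
# (configure_kernel_target plus the nvmet_auth_init host grant).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"     # opened here, closed again below
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"     # require explicit host grants
ln -s "$host" "$subsys/allowed_hosts/"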
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
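On the target side, nvmet_auth_set_key points the kernel's per-host DH-HMAC-CHAP attributes at the matching secret. As with the configfs writes earlier, xtrace hides the redirection targets; the dhchap_* attribute names below are inferred from the nvmet host configfs interface and are an assumption, not part of this trace:

# Inferred targets for the nvmet_auth_set_key echoes (host/auth.sh@42-51).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"     # HMAC the target will negotiate
echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe2048
echo "$key" > "$host/dhchap_key"              # DHHC-1:... host secret
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional secret, when set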
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.290 nvme0n1 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
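Each connect_authenticate pass then exercises one digest/dhgroup/keyid combination end to end: restrict the initiator to that algorithm pair, attach with the matching keyring entries, confirm a controller actually came up, and tear it back down. Stripped of the xtrace noise, one iteration (here keyid 0, the pass that begins above) looks like:

# One connect_authenticate iteration, de-noised (host/auth.sh@55-65).
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# The attach only succeeds once DH-HMAC-CHAP completes; verify, then detach.
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0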
00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.290 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.611 nvme0n1 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.611 14:06:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.611 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.612 nvme0n1 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.612 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.921 14:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.921 nvme0n1 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:17.921 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.922 nvme0n1 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.922 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.219 nvme0n1 00:23:18.219 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.220 14:06:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.220 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
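From this point the trace is pure repetition: the dhgroup loop has advanced from ffdhe2048 to ffdhe3072, and every group is replayed against all five key slots before the digest advances. The digest and dhgroup lists printed at host/auth.sh@94 above pin down the whole matrix:

# The test matrix driving the repetition below (host/auth.sh@100-104).
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do          # key slots 0..4 set up earlier
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done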
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.479 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.480 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:18.480 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:18.480 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:18.480 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.480 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.480 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 nvme0n1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:18.739 
14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 nvme0n1 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 14:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.739 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.739 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:18.740 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.999 14:06:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 nvme0n1 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.999 14:06:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.999 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.000 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.000 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.000 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.000 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.000 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.000 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.259 nvme0n1 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.259 14:06:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.259 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.518 nvme0n1 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.518 14:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.777 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.036 nvme0n1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:20.036 14:06:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.036 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 nvme0n1 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
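
For readers following the trace: every connect_authenticate pass above is the same four-step RPC sequence, driven by the nested dhgroup/keyid loops visible at host/auth.sh@101-@103. A minimal sketch of one pass, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the running target and that key1/ckey1 name secrets loaded earlier in the run (all flags below appear verbatim in the trace):

  # One connect_authenticate pass; digest, dhgroup, and keyid vary per loop iteration.
  digest=sha256; dhgroup=ffdhe4096; keyid=1

  # Pin the host to a single digest and DH group so each pass tests exactly one combination.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key; the controller key is present only on bidirectional passes.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # A successful DH-HMAC-CHAP negotiation leaves exactly one controller, nvme0.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Detach so the next digest/dhgroup/keyid combination starts clean.
  rpc_cmd bdev_nvme_detach_controller nvme0

The keyid=4 passes omit --dhchap-ctrlr-key, matching the empty ckey checked by the [[ -z '' ]] at host/auth.sh@51.
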
00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.295 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.561 nvme0n1 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.561 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.821 nvme0n1 00:23:20.821 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.821 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.821 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.821 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.821 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.821 14:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.821 14:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.821 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.080 nvme0n1 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.080 14:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.457 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.716 nvme0n1 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 
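
A note on the DHHC-1 secrets being echoed throughout this sweep: the representation is DHHC-1:<t>:<base64>:, where <t> records how the secret was transformed (00 = cleartext; 01/02/03 = SHA-256/384/512), and the base64 payload is the raw key followed by a 4-byte CRC-32 trailer per the NVMe DH-HMAC-CHAP secret format (background detail; nothing in this trace checks the CRC). That is why the 00/01 secrets in this run are 48 base64 characters (32-byte key + 4 CRC bytes), the 02 secrets 72 (48-byte key), and the 03 secrets 92 (64-byte key). A quick check using one secret copied from the trace, assuming only standard tools:

  # Decode the base64 field of a DHHC-1 secret and confirm the payload is
  # key length + 4 bytes of CRC-32 (68 bytes for a 64-byte key).
  secret='DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:'
  echo "$secret" | cut -d: -f3 | base64 -d | wc -c   # prints 68
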
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.716 14:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:22.976 nvme0n1
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
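Each connect_authenticate cycle that follows is the host-side mirror of the key just installed on the target: restrict the allowed digest and DH group, attach with the matching key names, confirm the controller came up, then detach. A condensed sketch of that cycle using SPDK's rpc.py directly (rpc_cmd in the trace is a thin wrapper around it; the script path is an assumption):

# allow only the digest/dhgroup pair under test
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
# attach to the kernel target, authenticating with the registered key names
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# authentication succeeded iff the controller shows up under its bdev name
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0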
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.976 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.544 nvme0n1
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]]
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:23.544 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.545 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.804 nvme0n1
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.804 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.805 14:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.062 nvme0n1
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.062 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
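With ffdhe6144 exhausted, the same five keys are replayed against ffdhe8192 below. The overall sweep visible in the for-loop traces (host/auth.sh@100-103) has this shape; the exact contents of the digests and dhgroups arrays beyond the values seen in this excerpt are assumptions:

# reconstructed loop structure; array contents partly assumed
for digest in "${digests[@]}"; do        # sha256 and sha384 appear in this excerpt
  for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe6144, ffdhe8192 appear here
    for keyid in "${!keys[@]}"; do       # key indices 0..4 in this run
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (kernel nvmet)
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (SPDK initiator)
    done
  done
done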
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]]
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.319 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.886 nvme0n1
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]]
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:24.886 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.887 14:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:25.455 nvme0n1
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:25.455 14:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.025 nvme0n1
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.025 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.594 nvme0n1
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.594 14:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:27.163 nvme0n1
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
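The sweep now repeats under sha384 (note the digest echoed as 'hmac(sha384)' below while the key strings stay the same). The names key0..key4 and ckey0..ckey3 passed to --dhchap-key/--dhchap-ctrlr-key are keyring entry names rather than raw secrets; a hedged sketch of how such names are typically registered once up front with SPDK's keyring module (the file paths here are illustrative, and this log does not show the registration step itself):

# register each DH-HMAC-CHAP secret file under the name the attach calls use
for i in 0 1 2 3 4; do
    scripts/rpc.py keyring_file_add_key key$i /tmp/host_key$i     # illustrative paths
done
for i in 0 1 2 3; do
    scripts/rpc.py keyring_file_add_key ckey$i /tmp/host_ckey$i   # ctrlr keys; keyid 4 has none
done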
DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.163 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.422 nvme0n1 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.422 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.423 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.423 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.423 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.423 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 nvme0n1 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:27.682 14:07:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 nvme0n1 00:23:27.682 14:07:06 
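
The nvmf/common.sh@769-783 lines repeated throughout this section are get_main_ns_ip resolving which address to dial. A sketch of its logic as reconstructed from the trace; note the associative array holds variable *names*, which are then dereferenced with bash indirect expansion (the variable consulted at common.sh@775, assumed here to be TEST_TRANSPORT, is expanded to "tcp" before xtrace prints it):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                   # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                            # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                          # -> 10.0.0.1
    }
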
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:27.682 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.683 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.943 14:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.943 nvme0n1 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.943 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 nvme0n1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 nvme0n1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.203 
14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.203 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.462 14:07:07 
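
Every secret in this run uses the DH-HMAC-CHAP secret representation "DHHC-1:<t>:<base64>:". Per the NVMe in-band authentication spec, <t> marks how the secret was transformed (00 = untransformed, 01/02/03 = SHA-256/-384/-512), and the base64 payload carries the raw secret followed by a 4-byte CRC-32, which is why the 01-tagged key above decodes to 32 + 4 bytes. A quick check, with the key copied from the trace:

    key='DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:'
    b64=${key#DHHC-1:??:}   # strip the "DHHC-1:<t>:" prefix
    b64=${b64%:}            # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # -> 36 (32-byte secret + CRC-32)
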
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.462 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.463 nvme0n1 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.463 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.723 nvme0n1 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:28.723 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.724 14:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 nvme0n1 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.983 
14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 nvme0n1 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 
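
The bare echo commands at host/auth.sh@48-51 throughout this section are nvmet_auth_set_key programming the Linux kernel target for the next handshake. xtrace does not print redirections, so the destinations are not in the log; a sketch assuming the standard nvmet configfs host attributes:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed path; the host NQN matches the -q argument used on attach.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"    # trace @48: echo 'hmac(sha384)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"      # trace @49
        echo "$key" > "$host/dhchap_key"              # trace @50
        # @51: the controller key is written only when one exists for this slot.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
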
14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.983 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.984 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.245 nvme0n1 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.245 14:07:08 
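
Each rpc_cmd in this log is bracketed by common/autotest_common.sh@561 (xtrace_disable) before the call and the "[[ 0 == 0 ]]" exit-status check at @589 after it, which is why those two lines repeat around every RPC. The helper bodies are not shown in the trace; a sketch of the pattern they imply (the rpc.py path and the exact split between rpc_cmd and xtrace_restore are assumptions):

    rpc_cmd() {
        xtrace_disable                 # prints as autotest_common.sh@561
        "$rootdir/scripts/rpc.py" "$@" # not traced while xtrace is off
        local status=$?
        xtrace_restore                 # re-enables set -x
        [[ $status == 0 ]]             # prints as "[[ 0 == 0 ]]" at @589
    }
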
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.245 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 nvme0n1 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.506 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.766 nvme0n1 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.766 14:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.766 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.026 nvme0n1 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.026 14:07:09 
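Every attach in this sweep is validated the same way before the next key is tried: list controllers, compare the reported name against the expected one, detach. A sketch of that check as it appears in the @64/@65 frames:

    # jq pulls the controller names out of the JSON the RPC returns.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')

    # Backslash-escaping every character makes the right-hand side a
    # literal string rather than a glob pattern inside [[ ]].
    [[ $name == \n\v\m\e\0 ]]

    # Tear down so the next digest/dhgroup/key combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0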
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.026 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.286 nvme0n1 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.286 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.855 nvme0n1 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.855 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
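The get_main_ns_ip helper traced at nvmf/common.sh@769-783 resolves the target address indirectly: it maps the transport to the name of an environment variable, then dereferences that name. A reconstruction under the assumption that the transport is carried in TEST_TRANSPORT and that the final lookup uses bash's ${!var} indirection (only the array setup and the echoed 10.0.0.1 are visible in the trace):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                     # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # traced as [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                     # ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                              # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                            # 10.0.0.1 in this run
    }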
DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.856 14:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.115 nvme0n1 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.115 14:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.115 14:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.115 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.373 nvme0n1 00:23:31.373 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.373 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.373 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.373 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.373 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.373 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.632 14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.632 
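Before each attach, nvmet_auth_set_key (host/auth.sh@42-51) pushes the same secrets into the kernel nvmet target: one echo each for the HMAC name, the DH group, the key and, when present, the controller key. The redirection targets are stripped from this trace; the sketch below assumes they are the standard nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a hypothetical host-entry path, so verify both against the actual auth.sh:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed location of the allowed-host entry (not shown in the trace).
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"       # traced: echo 'hmac(sha384)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup"    # traced: echo ffdhe6144
        echo "$key"          > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }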
14:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.891 nvme0n1 00:23:31.891 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.892 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.151 nvme0n1 00:23:32.151 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.151 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.151 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.151 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.151 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.411 14:07:11 
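keyid 4 differs from the others in this sweep: its ckey is empty (the trace shows ckey= and [[ -z '' ]]), and the attach at @61 passes only --dhchap-key key4. The line at host/auth.sh@58 is what makes that automatic: ${var:+word} expands to word only when var is set and non-empty, so the controller-key option pair simply disappears for unidirectional keys. A standalone demo of the idiom (array contents invented for illustration):

    ckeys=([0]=secret0 [1]=secret1 [4]="")    # keyid 4 has no controller key
    for keyid in "${!ckeys[@]}"; do
        # Expands to two words when ckeys[keyid] is non-empty,
        # and to an empty array otherwise.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${ckey[*]:-<host key only>}"
    done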
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.411 14:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 nvme0n1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.980 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.550 nvme0n1 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.550 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.551 
14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.551 14:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.119 nvme0n1 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.119 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.688 nvme0n1 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.688 14:07:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.688 14:07:13 
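Annotation: nvmet_auth_set_key (host/auth.sh@42-51) is the target-side half of each round. The xtrace shows the echo commands but not their redirections; on a Linux kernel nvmet target these values are normally written into the configfs attributes of the allowed-host entry. The path and attribute names below are the usual kernel layout, assumed rather than visible in this log:

  # Hedged reconstruction; redirection targets are not shown by xtrace.
  nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. hmac(sha384)
    echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe8192
    echo "$key" > "$host/dhchap_key"              # host secret
    # keyid=4 carries no controller key, so the [[ -z '' ]] seen above
    # short-circuits and bidirectional authentication is skipped that round
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }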
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.688 14:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 nvme0n1 00:23:35.256 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.256 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.256 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.256 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.256 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.516 nvme0n1 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.516 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.517 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.517 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.517 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.517 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.517 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.517 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 nvme0n1 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:35.776 
14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.776 14:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 nvme0n1 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.776 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.777 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.777 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.037 
14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.037 nvme0n1 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.037 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 nvme0n1 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG: 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 nvme0n1 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.297 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.557 
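Annotation: all of the secrets cycled through this section use the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, as produced by nvme-cli's gen-dhchap-key. The two-digit field selects the hash used to transform the secret (00 means it is used as-is; 01/02/03 correspond to SHA-256/384/512), and the base64 payload carries the raw secret with a 4-byte CRC32 appended, which is why the DHHC-1:03: strings above are visibly longer (64 + 4 bytes) than the DHHC-1:00: ones (32 + 4 bytes). An illustrative generation command; flag spellings vary across nvme-cli versions:

  nvme gen-dhchap-key --hmac=2 --key-length=48   # prints DHHC-1:02:<base64>: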
14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.557 14:07:15 
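Annotation: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment traced at host/auth.sh@58 is what makes the controller key optional per round: the :+ parameter expansion produces the two extra words only when ckeys[keyid] is set and non-empty, and an empty array then vanishes entirely from "${ckey[@]}". A standalone demonstration of the idiom:

  ckeys=([1]='DHHC-1:02:example==:' [4]='')
  for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done
  # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 -> 0 extra args:

This matches the attach commands in the log: key0 through key3 carry --dhchap-ctrlr-key ckeyN, while the key4 rounds attach with the host key alone.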
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.557 nvme0n1 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.557 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:36.558 14:07:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.558 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 nvme0n1 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.818 14:07:15 
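Annotation: each attach in this section is checked the same way (host/auth.sh@64-65): list the controllers over RPC, pull the names out with jq, compare against nvme0, and detach so the next digest/dhgroup/key combination starts from a clean slate. The \n\v\m\e\0 on the right-hand side is just how bash xtrace renders a quoted, literal pattern; in source form the check is simply:

  # jq flattens the controller list to bare names, one per line
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  # Quoting the RHS keeps [[ ]] from treating it as a glob pattern;
  # xtrace prints it back as \n\v\m\e\0 for exactly that reason
  [[ $name == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0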
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.818 14:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.818 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.077 nvme0n1 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:37.077 
14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
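Annotation: every rpc_cmd in the trace is bracketed by xtrace_disable (common/autotest_common.sh@561), the set +x it performs (@10), and a status check afterwards (@589), which keeps the RPC plumbing out of the console log while still failing the test on a non-zero exit. The generic shape of that wrapper, sketched here for orientation (illustrative, not the verbatim autotest_common.sh, which may also keep a persistent RPC server alive for speed):

  rpc_cmd() {
    xtrace_disable                        # suspend tracing around the call
    local rc=0
    "$rootdir/scripts/rpc.py" "$@" || rc=$?
    xtrace_restore                        # restore tracing; the @589-style
    [[ $rc == 0 ]]                        # check is the status comparison
  }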
00:23:37.077 nvme0n1
00:23:37.077 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.078 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.336 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.337 nvme0n1
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.337 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]]
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:37.596 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.597 nvme0n1
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]]
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
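On the target side, nvmet_auth_set_key installs the matching secrets before each connect; the DHHC-1:NN:...: strings echoed above are the NVMe-oF secret representation, where the middle field identifies the HMAC used to transform the secret (00 meaning untransformed). A rough sketch of what such a helper does, assuming the Linux kernel nvmet target and its usual configfs layout (the paths and attribute names here are assumptions, not taken from this log):

  hostnqn=nqn.2024-02.io.spdk:host0
  cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha512)' > "$cfg/dhchap_hash"     # digest, as echoed in the trace
  echo ffdhe4096 > "$cfg/dhchap_dhgroup"       # DH group, as echoed in the trace
  echo "$key" > "$cfg/dhchap_key"              # host secret for this keyid
  # Only bidirectional iterations also install a controller secret:
  [[ -n $ckey ]] && echo "$ckey" > "$cfg/dhchap_ctrl_key"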
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.597 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.857 14:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.857 nvme0n1
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]]
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.857 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 nvme0n1
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:38.118 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.378 nvme0n1
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
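The get_main_ns_ip sequence repeated before every attach picks the address for the transport under test; the xtrace shows it storing a variable name in ip and then dereferencing it. A reconstruction from those trace lines (names outside the trace itself, such as TEST_TRANSPORT, are assumptions):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                    # "[[ -z tcp ]]" in the trace
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # "[[ -z NVMF_INITIATOR_IP ]]"
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # holds a variable *name*...
      [[ -z ${!ip} ]] && return 1                             # ...dereferenced via ${!ip}
      echo "${!ip}"                                           # 10.0.0.1 in this run
  }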
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.378 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.948 nvme0n1
00:23:38.948 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.948 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:38.948 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:38.948 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.948 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.948 14:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.948 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.207 nvme0n1
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW:
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ:
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.207 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.775 nvme0n1
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==:
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt:
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.775 14:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.035 nvme0n1
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=:
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.035 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.294 nvme0n1
00:23:40.294 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.294 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:40.294 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:40.294 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.294 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.294 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
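Stepping back, this part of the log is driven by nested loops over DH groups (ffdhe3072 through ffdhe8192 across this excerpt) and key ids 0 to 4, pairing a target-side key install with a host-side authenticated connect. The shape, reconstructed from the auth.sh@101-104 trace lines (the enclosing digest loop is an assumption inferred from the repeated digest=sha512 assignments; arrays are populated earlier in the script):

  for dhgroup in "${dhgroups[@]}"; do       # ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 ...
      for keyid in "${!keys[@]}"; do        # 0 1 2 3 4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target-side secrets
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side attach/verify/detach
      done
  done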
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEwNzU2MTMzMjIzMGVhMzgzNGZlM2IwODgyOWEyM2M0HJMG:
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=: ]]
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmZkZjY0MWNhMGY5OWEwZjQzYzc2NDFiMjhlMzZhNzMzMjA0OTg0OTJmODQzYmFmN2UxZTcyNThiMWRjZmJkMXQW6Ic=:
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:40.553 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.554 14:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.121 nvme0n1
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==:
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]]
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==:
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:41.121 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.122 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.689 nvme0n1
00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:41.689 14:07:20
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:41.689 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.690 14:07:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.690 14:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.257 nvme0n1 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDUyOTEyY2JlNjM4NWZmNzlhNmNjMTY4NWNjYTkwMzMwZGE4OTQ1ZWUwYmI2YWE1uPw/tg==: 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjA2ZmVkMzQ4NWFmZWRiZjA4NDAwMWVmY2RkZmUzYmVaqaWt: 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.257 14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.257 
14:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.827 nvme0n1 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUyNmZhYThmMzM0ZGNlZTc2OTUzNTBiYWM2MGMwMGVlODU5ZWRjYzk0Mjk2NDY4Mjg0OGFlZmQ1NjcxNmM4OLW8SnY=: 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.827 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.396 nvme0n1 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.396 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.655 request:
00:23:43.655 {
00:23:43.655 "name": "nvme0",
00:23:43.655 "trtype": "tcp",
00:23:43.655 "traddr": "10.0.0.1",
00:23:43.655 "adrfam": "ipv4",
00:23:43.655 "trsvcid": "4420",
00:23:43.655 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:43.655 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:43.655 "prchk_reftag": false,
00:23:43.655 "prchk_guard": false,
00:23:43.655 "hdgst": false,
00:23:43.655 "ddgst": false,
00:23:43.655 "allow_unrecognized_csi": false,
00:23:43.655 "method": "bdev_nvme_attach_controller",
00:23:43.655 "req_id": 1
00:23:43.655 }
00:23:43.655 Got JSON-RPC error response
00:23:43.655 response:
00:23:43.655 {
00:23:43.655 "code": -5,
00:23:43.655 "message": "Input/output error"
00:23:43.655 }
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
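
The -5 (Input/output error) above is the expected outcome, not a failure: the subsystem requires DH-HMAC-CHAP, so an attach with no --dhchap-key has to be refused, and host/auth.sh asserts that through the NOT wrapper traced above. A minimal sketch of that inverted assertion, assuming only the helper behavior visible in this trace (the real common/autotest_common.sh version also distinguishes exit codes above 128 and can match on error output):

NOT() {
	local es=0
	"$@" || es=$?
	# Succeed only when the wrapped command failed.
	(( es != 0 ))
}

# Mirrors the call traced above: no --dhchap-key, so the attach must fail.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
	-s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
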
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.655 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.655 request:
00:23:43.655 {
00:23:43.655 "name": "nvme0",
00:23:43.655 "trtype": "tcp",
00:23:43.655 "traddr": "10.0.0.1",
00:23:43.655 "adrfam": "ipv4",
00:23:43.655 "trsvcid": "4420",
00:23:43.655 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:23:43.655 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:23:43.655 "prchk_reftag": false,
00:23:43.655 "prchk_guard": false,
00:23:43.655 "hdgst": false,
00:23:43.655 "ddgst": false,
00:23:43.655 "dhchap_key": "key2",
00:23:43.655 "allow_unrecognized_csi": false,
00:23:43.655 "method": "bdev_nvme_attach_controller",
00:23:43.655 "req_id": 1
00:23:43.655 }
00:23:43.655 Got JSON-RPC error response
00:23:43.655 response:
00:23:43.655 {
00:23:43.655 "code": -5,
00:23:43.655 "message": "Input/output error"
00:23:43.656 }
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
error" 00:23:43.656 } 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.656 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 nvme0n1 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.915 14:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.915 request: 00:23:43.915 { 00:23:43.915 "name": "nvme0", 00:23:43.915 "dhchap_key": "key1", 00:23:43.915 "dhchap_ctrlr_key": "ckey2", 00:23:43.915 "method": "bdev_nvme_set_keys", 00:23:43.915 "req_id": 1 00:23:43.915 } 00:23:43.915 Got JSON-RPC error response 00:23:43.915 response: 00:23:43.915 { 00:23:43.915 "code": -13, 00:23:43.915 "message": "Permission denied" 00:23:43.915 } 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:43.915 14:07:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:44.849 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.849 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:44.849 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.849 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTMxOTQ4YjNlOTBmNmU2NGRiMWU1MDc4NzYyMjBkNDRkYmEwOGFmYWVkZmU3YTE5qee+Fw==: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWJhMDE4NGRiMDUwODI4NTFjZjNmNWQ2MjRjYzljNzY2MjdkMzM5MmEwMjU4NmJm0JqtVw==: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.108 
14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.108 nvme0n1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjM4MmY0YTk0ZjBjNGIwOTE3Zjc5NDNiYTg2ZWI1NzACsaWW: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTA4OTJkYjM4NmZlZTY5NmE2NTcyYjRhNjkwZDJjMWP9bEOQ: 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.108 14:07:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.108 request: 00:23:45.108 { 00:23:45.108 "name": "nvme0", 00:23:45.108 "dhchap_key": "key2", 00:23:45.108 "dhchap_ctrlr_key": "ckey1", 00:23:45.108 "method": "bdev_nvme_set_keys", 00:23:45.108 "req_id": 1 00:23:45.108 } 00:23:45.108 Got JSON-RPC error response 00:23:45.108 response: 00:23:45.108 { 00:23:45.108 "code": -13, 00:23:45.108 "message": "Permission denied" 00:23:45.108 } 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:45.108 14:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.484 rmmod nvme_tcp 00:23:46.484 rmmod nvme_fabrics 00:23:46.484 
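
With the re-key matrix finished, the script removes its exit trap and nvmftestfini starts tearing down the host side; the bare rmmod lines above are the verbose output of modprobe -r. Condensed from the nvmf/common.sh trace around this point (a sketch built only from the commands visible in the log, not a verbatim excerpt), the host cleanup is roughly:

sync
modprobe -v -r nvme-tcp        # emits the "rmmod nvme_tcp" line seen above
modprobe -v -r nvme-fabrics    # traced just below
killprocess 1010712            # the nvmf target application started for this run
# Drop the SPDK_NVMF rules the test added, keeping everything else intact:
iptables-save | grep -v SPDK_NVMF | iptables-restore
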
14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1010712 ']' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1010712 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1010712 ']' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1010712 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1010712 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1010712' 00:23:46.484 killing process with pid 1010712 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1010712 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1010712 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.484 14:07:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:48.388 14:07:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:48.388 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:48.389 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:48.389 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:48.389 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:48.389 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:48.648 14:07:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:51.183 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:51.183 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:51.183 14:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.p6R /tmp/spdk.key-null.Hfp /tmp/spdk.key-sha256.x5r /tmp/spdk.key-sha384.FVo /tmp/spdk.key-sha512.g5F /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:51.183 14:07:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:53.716 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:23:53.716 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:23:53.716 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:53.716 00:23:53.716 real 0m47.867s 00:23:53.716 user 0m41.810s 00:23:53.716 sys 0m11.355s 00:23:53.716 14:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.717 ************************************ 00:23:53.717 END TEST nvmf_auth_host 00:23:53.717 ************************************ 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.717 ************************************ 00:23:53.717 START TEST nvmf_digest 00:23:53.717 ************************************ 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:53.717 * Looking for test storage... 00:23:53.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 
00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:53.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.717 --rc genhtml_branch_coverage=1 00:23:53.717 --rc genhtml_function_coverage=1 00:23:53.717 --rc genhtml_legend=1 00:23:53.717 --rc geninfo_all_blocks=1 00:23:53.717 --rc geninfo_unexecuted_blocks=1 00:23:53.717 00:23:53.717 ' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:53.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.717 --rc genhtml_branch_coverage=1 00:23:53.717 --rc genhtml_function_coverage=1 00:23:53.717 --rc genhtml_legend=1 00:23:53.717 --rc geninfo_all_blocks=1 00:23:53.717 --rc geninfo_unexecuted_blocks=1 00:23:53.717 00:23:53.717 ' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:53.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.717 --rc genhtml_branch_coverage=1 00:23:53.717 --rc genhtml_function_coverage=1 00:23:53.717 --rc genhtml_legend=1 00:23:53.717 --rc geninfo_all_blocks=1 00:23:53.717 --rc geninfo_unexecuted_blocks=1 00:23:53.717 00:23:53.717 ' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:53.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.717 --rc genhtml_branch_coverage=1 00:23:53.717 --rc genhtml_function_coverage=1 00:23:53.717 --rc genhtml_legend=1 00:23:53.717 --rc geninfo_all_blocks=1 00:23:53.717 --rc geninfo_unexecuted_blocks=1 00:23:53.717 00:23:53.717 ' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
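
Annotation: the lcov gate traced above reduces to a field-wise numeric version comparison. A condensed sketch of the cmp_versions/lt pair from scripts/common.sh; the in-tree helper also validates every field with decimal(), which is elided here.

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"   # split fields on '.', '-' and ':'
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((ver1[v] > ver2[v])) && { [[ $op == *">"* ]]; return; }
          ((ver1[v] < ver2[v])) && { [[ $op == *"<"* ]]; return; }
      done
      [[ $op == *"="* ]]               # all fields equal
  }
  lt() { cmp_versions "$1" "<" "$2"; }
  lt 1.15 2   # true here, so the lcov 1.x option set is exported
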
00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.717 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.718 14:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.286 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:00.287 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:00.287 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:00.287 Found net devices under 0000:31:00.0: cvl_0_0 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:00.287 Found net devices under 0000:31:00.1: cvl_0_1 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.287 14:07:38 
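
Annotation: the e810/x722/mlx arrays built above are allowlists of vendor:device IDs per NIC family; with SPDK_TEST_NVMF_NICS=e810 only the e810 matches survive into pci_devs. A condensed sketch of the discovery loop, with hypothetical stand-in bus addresses (the real ones come from setup.sh's pci_bus_cache):

  e810=(0000:31:00.0 0000:31:00.1)           # stand-in for ${pci_bus_cache["0x8086:0x159b"]}
  pci_devs=("${e810[@]}")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")   # keeps just the ifname, e.g. cvl_0_0
  done

With both ports found, the commands that follow move cvl_0_0 into the namespace cvl_0_0_ns_spdk at 10.0.0.2 (target side), leave cvl_0_1 at 10.0.0.1 in the root namespace (initiator side), and open TCP port 4420 through the ipts iptables wrapper.
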
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:24:00.287 00:24:00.287 --- 10.0.0.2 ping statistics --- 00:24:00.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.287 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:00.287 00:24:00.287 --- 10.0.0.1 ping statistics --- 00:24:00.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.287 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.287 ************************************ 00:24:00.287 START TEST nvmf_digest_clean 00:24:00.287 ************************************ 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.287 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1026960 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1026960 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1026960 ']' 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 
-- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:00.288 14:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.288 [2024-11-06 14:07:38.632954] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:24:00.288 [2024-11-06 14:07:38.633016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.288 [2024-11-06 14:07:38.723716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.288 [2024-11-06 14:07:38.773908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.288 [2024-11-06 14:07:38.773957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.288 [2024-11-06 14:07:38.773965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.288 [2024-11-06 14:07:38.773972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.288 [2024-11-06 14:07:38.773978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
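
Annotation: nvmf_tgt is launched above inside the target namespace with --wait-for-rpc, so the framework pauses until framework_start_init arrives over its RPC socket; the bdevperf instances below use the same pattern on /var/tmp/bperf.sock. That pause is what gives the DSA variants of these digest tests a chance to scan the accel module before init; here scan_dsa=false, so framework_start_init is issued directly. A sketch of the launch, where $rootdir stands in for the spdk checkout in the workspace:

  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"        # in-tree helper; blocks until /var/tmp/spdk.sock answers
  rpc.py framework_start_init     # releases the app from --wait-for-rpc
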
00:24:00.288 [2024-11-06 14:07:38.774776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.288 null0 00:24:00.288 [2024-11-06 14:07:39.520890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.288 [2024-11-06 14:07:39.545096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1027053 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1027053 /var/tmp/bperf.sock 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1027053 ']' 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:00.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.288 14:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:00.548 [2024-11-06 14:07:39.584018] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:24:00.548 [2024-11-06 14:07:39.584067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027053 ] 00:24:00.548 [2024-11-06 14:07:39.661285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.548 [2024-11-06 14:07:39.698217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.122 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:01.122 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:01.122 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:01.122 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.122 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:01.426 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.426 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.729 nvme0n1 00:24:01.729 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:01.729 14:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:01.988 Running I/O for 2 seconds... 
00:24:03.863 26704.00 IOPS, 104.31 MiB/s [2024-11-06T13:07:43.148Z] 27279.50 IOPS, 106.56 MiB/s 00:24:03.864 Latency(us) 00:24:03.864 [2024-11-06T13:07:43.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.864 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:03.864 nvme0n1 : 2.00 27297.73 106.63 0.00 0.00 4683.89 2061.65 15619.41 00:24:03.864 [2024-11-06T13:07:43.148Z] =================================================================================================================== 00:24:03.864 [2024-11-06T13:07:43.148Z] Total : 27297.73 106.63 0.00 0.00 4683.89 2061.65 15619.41 00:24:03.864 { 00:24:03.864 "results": [ 00:24:03.864 { 00:24:03.864 "job": "nvme0n1", 00:24:03.864 "core_mask": "0x2", 00:24:03.864 "workload": "randread", 00:24:03.864 "status": "finished", 00:24:03.864 "queue_depth": 128, 00:24:03.864 "io_size": 4096, 00:24:03.864 "runtime": 2.004013, 00:24:03.864 "iops": 27297.727110552674, 00:24:03.864 "mibps": 106.63174652559638, 00:24:03.864 "io_failed": 0, 00:24:03.864 "io_timeout": 0, 00:24:03.864 "avg_latency_us": 4683.892741553179, 00:24:03.864 "min_latency_us": 2061.653333333333, 00:24:03.864 "max_latency_us": 15619.413333333334 00:24:03.864 } 00:24:03.864 ], 00:24:03.864 "core_count": 1 00:24:03.864 } 00:24:03.864 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:03.864 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:03.864 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:03.864 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:03.864 | select(.opcode=="crc32c") 00:24:03.864 | "\(.module_name) \(.executed)"' 00:24:03.864 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1027053 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1027053 ']' 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1027053 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1027053 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1027053' 00:24:04.123 killing process with pid 1027053 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1027053 00:24:04.123 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.123 00:24:04.123 Latency(us) 00:24:04.123 [2024-11-06T13:07:43.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.123 [2024-11-06T13:07:43.407Z] =================================================================================================================== 00:24:04.123 [2024-11-06T13:07:43.407Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1027053 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1028000 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1028000 /var/tmp/bperf.sock 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1028000 ']' 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:04.123 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:04.382 [2024-11-06 14:07:43.429294] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:24:04.382 [2024-11-06 14:07:43.429351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028000 ] 00:24:04.382 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.382 Zero copy mechanism will not be used. 00:24:04.382 [2024-11-06 14:07:43.493504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.382 [2024-11-06 14:07:43.522854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.382 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:04.382 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:04.382 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:04.382 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:04.382 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:04.641 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.641 14:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.900 nvme0n1 00:24:04.900 14:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:04.900 14:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:04.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.900 Zero copy mechanism will not be used. 00:24:04.900 Running I/O for 2 seconds... 
00:24:07.219 4813.00 IOPS, 601.62 MiB/s [2024-11-06T13:07:46.503Z] 4769.50 IOPS, 596.19 MiB/s 00:24:07.219 Latency(us) 00:24:07.219 [2024-11-06T13:07:46.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:07.219 nvme0n1 : 2.05 4672.34 584.04 0.00 0.00 3357.16 471.04 46749.01 00:24:07.219 [2024-11-06T13:07:46.503Z] =================================================================================================================== 00:24:07.219 [2024-11-06T13:07:46.503Z] Total : 4672.34 584.04 0.00 0.00 3357.16 471.04 46749.01 00:24:07.219 { 00:24:07.219 "results": [ 00:24:07.219 { 00:24:07.219 "job": "nvme0n1", 00:24:07.219 "core_mask": "0x2", 00:24:07.219 "workload": "randread", 00:24:07.219 "status": "finished", 00:24:07.219 "queue_depth": 16, 00:24:07.219 "io_size": 131072, 00:24:07.219 "runtime": 2.045015, 00:24:07.219 "iops": 4672.3373667185815, 00:24:07.219 "mibps": 584.0421708398227, 00:24:07.219 "io_failed": 0, 00:24:07.219 "io_timeout": 0, 00:24:07.219 "avg_latency_us": 3357.1631913483343, 00:24:07.219 "min_latency_us": 471.04, 00:24:07.219 "max_latency_us": 46749.013333333336 00:24:07.219 } 00:24:07.219 ], 00:24:07.219 "core_count": 1 00:24:07.219 } 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:07.219 | select(.opcode=="crc32c") 00:24:07.219 | "\(.module_name) \(.executed)"' 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1028000 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1028000 ']' 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1028000 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1028000 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1028000' 00:24:07.219 killing process with pid 1028000 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1028000 00:24:07.219 Received shutdown signal, test time was about 2.000000 seconds 00:24:07.219 00:24:07.219 Latency(us) 00:24:07.219 [2024-11-06T13:07:46.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.219 [2024-11-06T13:07:46.503Z] =================================================================================================================== 00:24:07.219 [2024-11-06T13:07:46.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.219 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1028000 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1028675 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1028675 /var/tmp/bperf.sock 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1028675 ']' 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:07.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:07.479 [2024-11-06 14:07:46.580130] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:24:07.479 [2024-11-06 14:07:46.580184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028675 ] 00:24:07.479 [2024-11-06 14:07:46.644741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.479 [2024-11-06 14:07:46.672281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:07.479 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:07.738 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.738 14:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.997 nvme0n1 00:24:07.997 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:07.997 14:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:07.997 Running I/O for 2 seconds... 
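The trace above is the run_bperf pattern that each nvmf_digest_clean case repeats: bdevperf is started idle on a private RPC socket, framework init is completed over that socket, an NVMe/TCP controller is attached with data digest enabled, and the timed workload is driven through bdevperf.py. A condensed sketch of the sequence for this randwrite/4096/qd128 run, using $SPDK as shorthand (not in the log) for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:

  # -z keeps bdevperf idle until the perform_tests RPC arrives;
  # --wait-for-rpc defers SPDK subsystem init until framework_start_init.
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # Finish framework init over the private socket.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # Attach the target with data digest enabled (--ddgst); this creates nvme0n1.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the timed I/O; the JSON block that follows in the log is its output.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests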
00:24:10.313 30129.00 IOPS, 117.69 MiB/s
[2024-11-06T13:07:49.597Z] 29880.50 IOPS, 116.72 MiB/s
00:24:10.313 Latency(us)
00:24:10.313 [2024-11-06T13:07:49.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:10.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:10.313 nvme0n1 : 2.01 29877.80 116.71 0.00 0.00 4276.64 2061.65 10103.47
00:24:10.313 [2024-11-06T13:07:49.597Z] ===================================================================================================================
00:24:10.313 [2024-11-06T13:07:49.597Z] Total : 29877.80 116.71 0.00 0.00 4276.64 2061.65 10103.47
00:24:10.313 {
00:24:10.313   "results": [
00:24:10.313     {
00:24:10.313       "job": "nvme0n1",
00:24:10.313       "core_mask": "0x2",
00:24:10.313       "workload": "randwrite",
00:24:10.313       "status": "finished",
00:24:10.313       "queue_depth": 128,
00:24:10.313       "io_size": 4096,
00:24:10.313       "runtime": 2.005536,
00:24:10.313       "iops": 29877.798254431735,
00:24:10.313       "mibps": 116.71014943137396,
00:24:10.313       "io_failed": 0,
00:24:10.313       "io_timeout": 0,
00:24:10.313       "avg_latency_us": 4276.636486930014,
00:24:10.313       "min_latency_us": 2061.653333333333,
00:24:10.313       "max_latency_us": 10103.466666666667
00:24:10.313     }
00:24:10.313   ],
00:24:10.313   "core_count": 1
00:24:10.313 }
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:10.313 | select(.opcode=="crc32c")
00:24:10.313 | "\(.module_name) \(.executed)"'
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1028675
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1028675 ']'
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1028675
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1028675
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 --
# '[' reactor_1 = sudo ']' 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1028675' 00:24:10.313 killing process with pid 1028675 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1028675 00:24:10.313 Received shutdown signal, test time was about 2.000000 seconds 00:24:10.313 00:24:10.313 Latency(us) 00:24:10.313 [2024-11-06T13:07:49.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.313 [2024-11-06T13:07:49.597Z] =================================================================================================================== 00:24:10.313 [2024-11-06T13:07:49.597Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1028675 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1029353 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1029353 /var/tmp/bperf.sock 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1029353 ']' 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.313 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:10.314 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.314 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:10.572 [2024-11-06 14:07:49.612951] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:24:10.572 [2024-11-06 14:07:49.612996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029353 ] 00:24:10.572 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:10.572 Zero copy mechanism will not be used. 00:24:10.572 [2024-11-06 14:07:49.668227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.572 [2024-11-06 14:07:49.697523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.572 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:10.572 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:10.572 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:10.572 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:10.572 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.830 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.830 14:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.089 nvme0n1 00:24:11.089 14:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:11.089 14:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:11.348 Zero copy mechanism will not be used. 00:24:11.348 Running I/O for 2 seconds... 
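After every timed run the script verifies that the digests really went through the accel framework: it fetches accel_get_stats over the same socket and requires that the crc32c opcode executed at least once on the expected module (software here, since scan_dsa=false). The host/digest.sh@36-@96 trace lines that follow each result block boil down to this check (a sketch, again with the hypothetical $SPDK shorthand):

  # Reduce the accel stats to one line, "<module_name> <executed>", for crc32c.
  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"')

  (( acc_executed > 0 ))           # digests were actually computed...
  [[ $acc_module == software ]]    # ...and by the software module, as expected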
00:24:13.223 4118.00 IOPS, 514.75 MiB/s
[2024-11-06T13:07:52.507Z] 4639.00 IOPS, 579.88 MiB/s
00:24:13.223 Latency(us)
00:24:13.223 [2024-11-06T13:07:52.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.223 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:13.223 nvme0n1 : 2.00 4636.05 579.51 0.00 0.00 3445.16 1181.01 16930.13
00:24:13.223 [2024-11-06T13:07:52.507Z] ===================================================================================================================
00:24:13.223 [2024-11-06T13:07:52.507Z] Total : 4636.05 579.51 0.00 0.00 3445.16 1181.01 16930.13
00:24:13.223 {
00:24:13.223   "results": [
00:24:13.223     {
00:24:13.223       "job": "nvme0n1",
00:24:13.223       "core_mask": "0x2",
00:24:13.223       "workload": "randwrite",
00:24:13.223       "status": "finished",
00:24:13.223       "queue_depth": 16,
00:24:13.223       "io_size": 131072,
00:24:13.223       "runtime": 2.004722,
00:24:13.223       "iops": 4636.054275854707,
00:24:13.223       "mibps": 579.5067844818384,
00:24:13.223       "io_failed": 0,
00:24:13.223       "io_timeout": 0,
00:24:13.223       "avg_latency_us": 3445.161730148483,
00:24:13.223       "min_latency_us": 1181.0133333333333,
00:24:13.223       "max_latency_us": 16930.133333333335
00:24:13.223     }
00:24:13.223   ],
00:24:13.223   "core_count": 1
00:24:13.223 }
00:24:13.223 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:13.223 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:24:13.223 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:13.223 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:13.223 | select(.opcode=="crc32c")
00:24:13.223 | "\(.module_name) \(.executed)"'
00:24:13.223 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1029353
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1029353 ']'
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1029353
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1029353
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- #
'[' reactor_1 = sudo ']' 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1029353' 00:24:13.483 killing process with pid 1029353 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1029353 00:24:13.483 Received shutdown signal, test time was about 2.000000 seconds 00:24:13.483 00:24:13.483 Latency(us) 00:24:13.483 [2024-11-06T13:07:52.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.483 [2024-11-06T13:07:52.767Z] =================================================================================================================== 00:24:13.483 [2024-11-06T13:07:52.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1029353 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1026960 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1026960 ']' 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1026960 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.483 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1026960 00:24:13.742 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.742 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.742 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1026960' 00:24:13.743 killing process with pid 1026960 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1026960 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1026960 00:24:13.743 00:24:13.743 real 0m14.292s 00:24:13.743 user 0m27.867s 00:24:13.743 sys 0m2.992s 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 ************************************ 00:24:13.743 END TEST nvmf_digest_clean 00:24:13.743 ************************************ 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 ************************************ 00:24:13.743 START TEST nvmf_digest_error 00:24:13.743 ************************************ 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1030055 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1030055 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1030055 ']' 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:13.743 14:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.743 [2024-11-06 14:07:52.971423] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:24:13.743 [2024-11-06 14:07:52.971472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.003 [2024-11-06 14:07:53.046154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.003 [2024-11-06 14:07:53.074161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.003 [2024-11-06 14:07:53.074187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.003 [2024-11-06 14:07:53.074193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.003 [2024-11-06 14:07:53.074198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.003 [2024-11-06 14:07:53.074202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
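The error-path variant starts the target with --wait-for-rpc on purpose: crc32c has to be re-routed to the error-injection accel module before the accel framework initializes, which is what the accel_assign_opc notice below records. A target-side bring-up sketch matching the notices in this excerpt; the null bdev and subsystem parameters are not shown in the log, so the values here are illustrative only:

  rpc.py accel_assign_opc -o crc32c -m error    # route crc32c to the error module
  rpc.py framework_start_init
  rpc.py bdev_null_create null0 100 4096        # backing namespace; sizes assumed
  rpc.py nvmf_create_transport -t tcp           # "TCP Transport Init" notice below
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                # "Listening on 10.0.0.2 port 4420"

In this job the target runs inside the cvl_0_0_ns_spdk network namespace, so each of these calls actually goes through ip netns exec as the nvmf_tgt launch line above shows.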
00:24:14.003 [2024-11-06 14:07:53.074703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.003 [2024-11-06 14:07:53.123032] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.003 null0 00:24:14.003 [2024-11-06 14:07:53.198284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.003 [2024-11-06 14:07:53.222468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1030079 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1030079 /var/tmp/bperf.sock 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1030079 ']' 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.003 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:14.003 [2024-11-06 14:07:53.261238] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:24:14.003 [2024-11-06 14:07:53.261290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030079 ] 00:24:14.263 [2024-11-06 14:07:53.325492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.263 [2024-11-06 14:07:53.355279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.263 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.263 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:24:14.263 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.263 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.522 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:14.522 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.522 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.522 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.522 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.522 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.783 nvme0n1 00:24:14.783 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:14.783 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.783 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:24:14.783 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.783 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:14.783 14:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.783 Running I/O for 2 seconds... 00:24:14.783 [2024-11-06 14:07:53.957517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:53.957546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:53.957555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:53.966335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:53.966355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:53.966362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:53.975720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:53.975740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:53.975747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:53.985343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:53.985361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:53.985368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:53.994593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:53.994611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:53.994617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.003049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.003067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.003073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.012588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.012605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.012612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.020274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.020292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.020299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.030407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.030425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.030435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.039934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.039951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.039958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.048197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.048215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.048221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.783 [2024-11-06 14:07:54.058249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:14.783 [2024-11-06 14:07:54.058266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.783 [2024-11-06 14:07:54.058273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.067739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.067757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.067763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.076640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.076656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.076662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.086279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.086296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.086302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.096343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.096360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.096367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.104884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.104901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.104907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.114395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.114415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.114421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.121814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.121831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.121837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.132792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.132809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:15.044 [2024-11-06 14:07:54.132815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.144288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.144306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.144312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.155891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.155907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.155913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.164295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.164311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.164317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.174193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.174210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.174216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.183413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.183431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.183437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.191863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.191880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.191887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.200975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.200992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:24080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.200998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.209120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.209137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.209143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.218700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.218717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.218723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.227026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.227043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.044 [2024-11-06 14:07:54.227049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.044 [2024-11-06 14:07:54.235789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.044 [2024-11-06 14:07:54.235806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.235812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.245654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.245672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.245678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.254479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.254496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.254503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.265444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.265461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.265467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.277442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.277459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.277468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.285885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.285902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.285908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.295400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.295418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.295424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.303874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.303891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.303898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.312234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.312255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.312261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.045 [2024-11-06 14:07:54.322950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:15.045 [2024-11-06 14:07:54.322967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.045 [2024-11-06 14:07:54.322973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.305 [2024-11-06 14:07:54.331880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 
00:24:15.305 [2024-11-06 14:07:54.331897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.305 [2024-11-06 14:07:54.331903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.305 [2024-11-06 14:07:54.339457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00)
00:24:15.305 [2024-11-06 14:07:54.339474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.305 [2024-11-06 14:07:54.339480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest *ERROR* on tqpair 0x1a3dd00, the failing READ command, and its TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly sixty more single-block READs on qid:1 with varying cid and lba values, 14:07:54.350 through 14:07:54.932 ...]
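The repeated *ERROR* above is the NVMe/TCP receive path rejecting PDUs whose data digest does not match: when data digests (DDGST) are negotiated, the initiator computes a CRC-32C over each data PDU's payload and, on mismatch, fails the command with a transient transport error so the host may retry. Below is a minimal, self-contained sketch of that digest computation only; SPDK itself runs this check through its accel framework (hence the `nvme_tcp_accel_seq_recv_compute_crc32_done` frames above), and the PDU-handling context is inferred from the log, not taken from the test source.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reference bitwise CRC-32C (Castagnoli), reflected polynomial
 * 0x82F63B78 -- the digest algorithm NVMe/TCP uses for HDGST/DDGST.
 * Illustration only; SPDK offloads this via its accel framework. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *buf++;
        for (int bit = 0; bit < 8; bit++) {
            /* Shift right; XOR the polynomial when the dropped bit was 1. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
    /* Standard check vector: CRC-32C("123456789") == 0xE3069283. */
    const uint8_t msg[] = "123456789";

    printf("crc32c = 0x%08x\n", (unsigned)crc32c(msg, sizeof(msg) - 1));
    return 0;
}
```

Compiled and run, this prints 0xe3069283, the published CRC-32C check value; a receiver whose computed digest differs from the DDGST carried in the PDU reports exactly the "data digest error" seen in this log.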
00:24:15.830 26852.00 IOPS, 104.89 MiB/s [2024-11-06T13:07:55.114Z]
[... the digest-error/READ/completion pattern resumes immediately at 14:07:54.942 and continues uninterrupted through 14:07:55.297, still on qid:1 of tqpair 0x1a3dd00 ...]
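Each completion line above ends with the decoded status word: "(00/22)" is status code type 0x0 (generic command status) over status code 0x22 (Transient Transport Error), and p/m/dnr are the phase, more, and do-not-retry bits. A small decoder over that bit layout is sketched below; the raw value 0x0044 is a constructed example matching these completions, not a value taken from the log.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit completion status word (phase tag in bit 0, status
 * field in bits 15:1, per the NVMe base spec's CQE dword 3 layout) into
 * the fields printed above as "(SCT/SC) ... p:%x m:%x dnr:%x". */
static void
print_status(uint16_t sts)
{
    unsigned p   = sts & 0x1u;          /* phase tag            */
    unsigned sc  = (sts >> 1) & 0xffu;  /* status code          */
    unsigned sct = (sts >> 9) & 0x7u;   /* status code type     */
    unsigned m   = (sts >> 14) & 0x1u;  /* more status info     */
    unsigned dnr = (sts >> 15) & 0x1u;  /* do not retry         */

    printf("(%02x/%02x) p:%x m:%x dnr:%x%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x22) ? "  /* transient transport error */" : "");
}

int
main(void)
{
    /* Constructed example: sct=0x0, sc=0x22, phase/more/dnr clear. */
    print_status(0x0044);
    return 0;
}
```

As a sanity check on the inline throughput sample above, 26852 IOPS of the test's single-block (len:1) reads works out to the reported rate exactly if the block size is 4096 bytes: 26852 x 4096 / 2^20 = 104.89 MiB/s. That suggests a 4 KiB-formatted namespace, though the log itself does not state the block size.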
[... the pattern continues through 14:07:55.675 ...]
00:24:16.616 [2024-11-06 14:07:55.684831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00)
00:24:16.616 [2024-11-06 14:07:55.684848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.616 [2024-11-06 14:07:55.684854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.693356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.693373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.693379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.702067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.702083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.702090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.711861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.711878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.711884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.719481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.719497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.719503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.729362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.729382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.729388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.737470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.737486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.737493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.746801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.746818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.746824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.757249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.757266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.757272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.765902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.765918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.765925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.775952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.775969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.775975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.786836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.786852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.786859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.796946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.796963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.796969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.807678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.807695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.807701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.815915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.815932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:16.616 [2024-11-06 14:07:55.815938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.825691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.825707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.825713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.834108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.834125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.834131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.842610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.842632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.851477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.851494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.851500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.861529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.861546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.870117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.870133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.870139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.879562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.879579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:19788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.879586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.888819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.888836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.888846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.616 [2024-11-06 14:07:55.898042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.616 [2024-11-06 14:07:55.898059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.616 [2024-11-06 14:07:55.898065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.876 [2024-11-06 14:07:55.906529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.876 [2024-11-06 14:07:55.906546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.876 [2024-11-06 14:07:55.906552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.876 [2024-11-06 14:07:55.915829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.876 [2024-11-06 14:07:55.915846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.876 [2024-11-06 14:07:55.915852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.876 [2024-11-06 14:07:55.924424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.876 [2024-11-06 14:07:55.924441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.876 [2024-11-06 14:07:55.924447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.876 [2024-11-06 14:07:55.934208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a3dd00) 00:24:16.876 [2024-11-06 14:07:55.934225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.876 [2024-11-06 14:07:55.934231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.876 27076.50 IOPS, 105.77 MiB/s 00:24:16.876 Latency(us) 00:24:16.876 [2024-11-06T13:07:56.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.877 Job: nvme0n1 (Core Mask 0x2, 
00:24:16.876 27076.50 IOPS, 105.77 MiB/s
00:24:16.876 Latency(us)
00:24:16.876 [2024-11-06T13:07:56.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.877 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:16.877 nvme0n1 : 2.01 27077.57 105.77 0.00 0.00 4720.82 2211.84 20097.71
00:24:16.877 [2024-11-06T13:07:56.161Z] ===================================================================================================================
00:24:16.877 [2024-11-06T13:07:56.161Z] Total : 27077.57 105.77 0.00 0.00 4720.82 2211.84 20097.71
00:24:16.877 {
00:24:16.877   "results": [
00:24:16.877     {
00:24:16.877       "job": "nvme0n1",
00:24:16.877       "core_mask": "0x2",
00:24:16.877       "workload": "randread",
00:24:16.877       "status": "finished",
00:24:16.877       "queue_depth": 128,
00:24:16.877       "io_size": 4096,
00:24:16.877       "runtime": 2.005165,
00:24:16.877       "iops": 27077.572169871306,
00:24:16.877       "mibps": 105.77176628855979,
00:24:16.877       "io_failed": 0,
00:24:16.877       "io_timeout": 0,
00:24:16.877       "avg_latency_us": 4720.822249808147,
00:24:16.877       "min_latency_us": 2211.84,
00:24:16.877       "max_latency_us": 20097.706666666665
00:24:16.877     }
00:24:16.877   ],
00:24:16.877   "core_count": 1
00:24:16.877 }
00:24:16.877 14:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:16.877 14:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:16.877 14:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:16.877 14:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 ))
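The pass gate just traced, (( 212 > 0 )), is the whole check: one RPC plus one jq filter over its output. A minimal stand-alone sketch of the same thing, using the socket and rpc.py path seen throughout this job (only the errcount variable name is new):

  # Pull the per-bdev NVMe error statistics (populated because bdev_nvme_set_options
  # was called with --nvme-error-stat) and extract the transient-transport-error count.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes when at least one injected digest error was counted.
  (( errcount > 0 )) && echo "counted $errcount transient transport errors"

Here the run counted 212 of them, so every corruption surfaced through the error statistics rather than as a hard I/O failure.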
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1030079
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1030079 ']'
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1030079
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:16.877 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1030079
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1030079'
00:24:17.136 killing process with pid 1030079
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1030079
00:24:17.136 Received shutdown signal, test time was about 2.000000 seconds
00:24:17.136
00:24:17.136 Latency(us)
00:24:17.136 [2024-11-06T13:07:56.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:17.136 [2024-11-06T13:07:56.420Z] ===================================================================================================================
00:24:17.136 [2024-11-06T13:07:56.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1030079
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1030754
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1030754 /var/tmp/bperf.sock
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1030754 ']'
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:17.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:17.136 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:17.137 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:17.137 [2024-11-06 14:07:56.296795] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:24:17.137 [2024-11-06 14:07:56.296854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030754 ]
00:24:17.137 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:17.137 Zero copy mechanism will not be used.
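Condensed, the launch pattern just traced is: start bdevperf idle with -z, wait for its RPC socket, and only then configure the bdev under test. A sketch under the same flags and paths (the polling loop stands in for the harness's waitforlisten helper; rpc_get_methods is simply a cheap RPC to probe with):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 2: core mask 0x2 (one reactor, on core 1); -r: RPC socket; -z: start idle and wait for RPCs
  # -w randread -o 131072 -q 16 -t 2: the workload bdevperf will run once perform_tests arrives
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # block until the socket answers, as waitforlisten does with its retry budget
  until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done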
00:24:17.137 [2024-11-06 14:07:56.360587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.137 [2024-11-06 14:07:56.389910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:17.396 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:17.656 nvme0n1
00:24:17.656 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:17.656 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.656 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:17.656 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.656 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:17.656 14:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
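The RPCs traced above are the entire error-injection setup for this pass. Replayed as a plain script against the same socket, a sketch of what host/digest.sh drives through its bperf_rpc and rpc_cmd wrappers (only the RPC shorthand variable is new):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # keep per-status-code NVMe error counters and retry failed I/O indefinitely
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any stale crc32c injection, then attach with TCP data digest (--ddgst) enabled
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm the accel layer to corrupt crc32c results (-t corrupt -i 32, as traced above),
  # so receive-side data digest checks on nvme0n1 start failing
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # finally release the queued bdevperf workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests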
00:24:17.917 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:17.917 Zero copy mechanism will not be used.
00:24:17.917 Running I/O for 2 seconds...
00:24:17.918 [2024-11-06 14:07:56.975012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0)
00:24:17.918 [... the injected digest failures for this job follow the same three-line pattern as the earlier run, now with 128 KiB READs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<n> nsid:1 lba:<n> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<n> cdw0:0 sqhd:<nnnn> p:0 m:0 dnr:0 -- sixty-odd records from [2024-11-06 14:07:56.975012] through [2024-11-06 14:07:57.290850], cids in the 0-15 range and sqhd cycling 0001/0021/0041/0061, elided here; the run continues in the same pattern ...]
00:24:18.181 [2024-11-06 14:07:57.299497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.299518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.299524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.308282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.308300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.308306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.316671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.316689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.316696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.324579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.324598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.324604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.332923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.332941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.332948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.341456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.341475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.341481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.349209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.349227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.349233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.354974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.354992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.355003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.361545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.361563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.361569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.367759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.367778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.367785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.374911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.374929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.374935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.380724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.380741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.380759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.387863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.387882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.387888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.395049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.181 [2024-11-06 14:07:57.395067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.181 [2024-11-06 14:07:57.395073] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.181 [2024-11-06 14:07:57.399682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.399699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.399706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.407638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.407655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.407662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.416485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.416502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.416509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.421700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.421717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.421727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.426085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.426102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.426108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.431122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.431138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.431145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.436585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.436602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.436608] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.442196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.442213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.442220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.448647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.448664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.448671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.453473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.453490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.453497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.458380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.458396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.182 [2024-11-06 14:07:57.463168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.182 [2024-11-06 14:07:57.463185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.182 [2024-11-06 14:07:57.463192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.466469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.466490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.466496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.469837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.469854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.444 [2024-11-06 14:07:57.469861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.473940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.473957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.473964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.479278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.479296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.479302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.485071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.485089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.485096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.491225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.491248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.491255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.498590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.498609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.498616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.504366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.504385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.504391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.509705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.509722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.509729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.514250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.514267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.514273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.518718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.518736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.518742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.523416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.523435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.523441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.529000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.529018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.529025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.534980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.534998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.535004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.539527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.539546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.539552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.545374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.545392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.545398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.550774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.550792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.550798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.556087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.556105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.556114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.561869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.561887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.561894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.567190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.567208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.567214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.572573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.572591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.572597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.578327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.578346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.578352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.584523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.584541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.584548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.444 [2024-11-06 14:07:57.589273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.444 [2024-11-06 14:07:57.589291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.444 [2024-11-06 14:07:57.589297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.594506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.594525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.594531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.598258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.598276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.598282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.605135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.605157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.605163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.609314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.609333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.609339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.613063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.613082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.613088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.616988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 
[2024-11-06 14:07:57.617006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.617013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.620999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.621017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.621024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.624516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.624534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.624541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.627593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.627610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.627617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.632435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.632453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.632459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.638896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.638914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.638920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.644080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.644099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.644105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.649872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.649889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.649896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.654726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.654743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.654749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.659287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.659305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.659312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.664427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.664445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.664451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.669989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.670008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.670015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.675893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.675911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.675917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.681406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.681424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.681430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.686140] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.686158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.686170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.690817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.690836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.690842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.695774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.695791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.695797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.700711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.700729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.445 [2024-11-06 14:07:57.705462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.445 [2024-11-06 14:07:57.705481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.445 [2024-11-06 14:07:57.705487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.446 [2024-11-06 14:07:57.710097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.446 [2024-11-06 14:07:57.710115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.446 [2024-11-06 14:07:57.710121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.446 [2024-11-06 14:07:57.715334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.446 [2024-11-06 14:07:57.715352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.446 [2024-11-06 14:07:57.715359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:18.446 [2024-11-06 14:07:57.720139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.446 [2024-11-06 14:07:57.720156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.446 [2024-11-06 14:07:57.720162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.446 [2024-11-06 14:07:57.724491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.446 [2024-11-06 14:07:57.724509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.446 [2024-11-06 14:07:57.724515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.729566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.729585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.729591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.734407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.734426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.734433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.742638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.742657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.742664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.748482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.748499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.748506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.751690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.751706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.751713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.756484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.756500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.756507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.762438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.762455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.762461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.767388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.767405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.767411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.772774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.772791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.772801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.778515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.778532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.778538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.784101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.784117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.706 [2024-11-06 14:07:57.784123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.706 [2024-11-06 14:07:57.789091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.706 [2024-11-06 14:07:57.789109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.789115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.794550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.794568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.794574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.800782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.800799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.800805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.806595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.806613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.806620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.811115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.811133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.811140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.816509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.816527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.816533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.820780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.820801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.820807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.825913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.825932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.707 [2024-11-06 14:07:57.825938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.830711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.830730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.830754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.835556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.835575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.835581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.839952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.839969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.839976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.843161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.843177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.843184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.847940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.847957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.847963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.852927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.852944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.707 [2024-11-06 14:07:57.852950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.707 [2024-11-06 14:07:57.859341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:18.707 [2024-11-06 14:07:57.859365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 
00:24:18.707 [2024-11-06 14:07:57.859372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... roughly 140 further repetitions of the same three-line sequence omitted: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0), followed by the *NOTICE* READ command (sqid:1 nsid:1 len:32, varying cid and lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, spanning timestamps 2024-11-06 14:07:57.864 through 14:07:58.596; the one interleaved throughput sample is kept below ...]
00:24:18.708 5723.00 IOPS, 715.38 MiB/s [2024-11-06T13:07:57.992Z] [2024-11-06 14:07:57.965488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0)
00:24:19.493 [2024-11-06 14:07:58.598887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0)
00:24:19.493 [2024-11-06 14:07:58.598905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.493 [2024-11-06 14:07:58.598914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.493 [2024-11-06 14:07:58.603201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.493 [2024-11-06 14:07:58.603218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.493 [2024-11-06 14:07:58.603225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.493 [2024-11-06 14:07:58.608270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.493 [2024-11-06 14:07:58.608289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.493 [2024-11-06 14:07:58.608295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.493 [2024-11-06 14:07:58.613912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.613930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.613936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.619442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.619459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.619465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.625000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.625019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.625025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.630235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.630260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.630266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.636057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.636076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.636082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.642996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.643015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.643021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.649011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.649033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.649039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.654119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.654138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.654145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.657599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.657617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.657623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.660985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.661011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.666582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.666600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.666606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.673201] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.673220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.673226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.679926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.679946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.679952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.684859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.684878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.684884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.690776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.690794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.690800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.695565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.695583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.695589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.700302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.700320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.700326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.705841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.705859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.705866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:19.494 [2024-11-06 14:07:58.712141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.712160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.712166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.718409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.718428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.718434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.721920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.721938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.721945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.725692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.725710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.725716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.730161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.730179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.730185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.735208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.735226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.735236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.739454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.494 [2024-11-06 14:07:58.739471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.494 [2024-11-06 14:07:58.739477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.494 [2024-11-06 14:07:58.745673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.495 [2024-11-06 14:07:58.745691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.495 [2024-11-06 14:07:58.745698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.495 [2024-11-06 14:07:58.751342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.495 [2024-11-06 14:07:58.751361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.495 [2024-11-06 14:07:58.751367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.495 [2024-11-06 14:07:58.757512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.495 [2024-11-06 14:07:58.757530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.495 [2024-11-06 14:07:58.757537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.495 [2024-11-06 14:07:58.762812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.495 [2024-11-06 14:07:58.762830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.495 [2024-11-06 14:07:58.762837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.495 [2024-11-06 14:07:58.767500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.495 [2024-11-06 14:07:58.767518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.495 [2024-11-06 14:07:58.767525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.495 [2024-11-06 14:07:58.772418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.495 [2024-11-06 14:07:58.772436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.495 [2024-11-06 14:07:58.772442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.780120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.780138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.780145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.786315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.786336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.786342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.792268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.792286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.792292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.798301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.798319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.798325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.803288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.803312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.808335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.808353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.808360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.813647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.813664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.813671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.819653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.754 [2024-11-06 14:07:58.819677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.824368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.824386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.824393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.829576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.829594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.829600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.834666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.834684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.834690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.839796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.839815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.839821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.754 [2024-11-06 14:07:58.844666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.754 [2024-11-06 14:07:58.844684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-06 14:07:58.844690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.848263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.848284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.848290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.852825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.852844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.852850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.857945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.857964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.857970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.863065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.863083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.863089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.867786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.867805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.867811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.873143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.873165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.873171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.879794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.879813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.879820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.887232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.887263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.892796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.892814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.892820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.898911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.898929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.898936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.906176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.906193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.906200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.909858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.909875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.909881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.914834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.914852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.914859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.920311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.920336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.920342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.925679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.925696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.925702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.930558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 
[2024-11-06 14:07:58.930576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.930582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.935751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.935769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.935776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.940492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.940510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.940516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.947641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.947659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.947666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.953771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.953790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.953796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.959140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.959158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.959164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.755 [2024-11-06 14:07:58.964654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f25bb0) 00:24:19.755 [2024-11-06 14:07:58.964672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-06 14:07:58.964679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.755 5772.50 IOPS, 721.56 MiB/s 00:24:19.755 Latency(us) 00:24:19.755 [2024-11-06T13:07:59.039Z] Device 
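Each failed read above is reported as a three-line group: the TCP transport flags the bad data digest, then the offending READ and its completion are printed with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), the retryable transport-level status (note dnr:0, do-not-retry clear) that the host uses when it detects a digest failure. Besides the live RPC counter queried below, the completions can also be tallied from a captured console log (a sketch; 'console.log' is a hypothetical saved copy of this output):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log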
00:24:19.755 5772.50 IOPS, 721.56 MiB/s
00:24:19.755 Latency(us)
00:24:19.755 [2024-11-06T13:07:59.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:19.755 nvme0n1 : 2.00 5770.98 721.37 0.00 0.00 2769.76 542.72 12178.77
00:24:19.755 [2024-11-06T13:07:59.039Z] ===================================================================================================================
00:24:19.755 [2024-11-06T13:07:59.039Z] Total : 5770.98 721.37 0.00 0.00 2769.76 542.72 12178.77
00:24:19.755 {
00:24:19.755 "results": [
00:24:19.755 {
00:24:19.755 "job": "nvme0n1",
00:24:19.755 "core_mask": "0x2",
00:24:19.755 "workload": "randread",
00:24:19.755 "status": "finished",
00:24:19.755 "queue_depth": 16,
00:24:19.755 "io_size": 131072,
00:24:19.755 "runtime": 2.003298,
00:24:19.755 "iops": 5770.983647964506,
00:24:19.755 "mibps": 721.3729559955633,
00:24:19.755 "io_failed": 0,
00:24:19.755 "io_timeout": 0,
00:24:19.755 "avg_latency_us": 2769.7639627483204,
00:24:19.755 "min_latency_us": 542.72,
00:24:19.755 "max_latency_us": 12178.773333333333
00:24:19.755 }
00:24:19.755 ],
00:24:19.755 "core_count": 1
00:24:19.755 }
00:24:19.755 14:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:19.755 14:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:19.755 14:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:19.755 | .driver_specific
00:24:19.756 | .nvme_error
00:24:19.756 | .status_code
00:24:19.756 | .command_transient_transport_error'
00:24:19.756 14:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
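get_transient_errcount chains the bdev_get_iostat RPC with the jq filter shown above: because the controller was created with --nvme-error-stat, the bdev layer keeps per-status-code NVMe error counters, and the filter pulls out the transient-transport-error bucket for the first bdev. The same query can be issued by hand against this run's RPC socket (a sketch using the paths from the trace):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Here it returns 372, which the (( 372 > 0 )) assertion on the next line checks: the injected digest corruption was observed and counted.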
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 ))
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1030754
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1030754 ']'
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1030754
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1030754
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1030754'
00:24:20.015 killing process with pid 1030754
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1030754
00:24:20.015 Received shutdown signal, test time was about 2.000000 seconds
00:24:20.015
00:24:20.015 Latency(us)
00:24:20.015 [2024-11-06T13:07:59.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.015 [2024-11-06T13:07:59.299Z] ===================================================================================================================
00:24:20.015 [2024-11-06T13:07:59.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1030754
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1031429
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1031429 /var/tmp/bperf.sock
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1031429 ']'
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:20.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:20.015 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
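The second pass reuses the same harness with a write workload; run_bperf_err just parameterizes one bdevperf process per (rw, bs, qd) combination. The launch line above is the whole benchmark side of the test. Roughly, its flags mean the following as used here (a sketch; -z holds the process idle until the perform_tests RPC fires later in the trace):

    # -m 2: core mask 0x2, run the reactor on core 1
    # -r: UNIX-domain RPC socket that the bperf_rpc helpers target
    # -w randwrite -o 4096 -q 128: workload, IO size in bytes, queue depth
    # -t 2: run the test for 2 seconds
    # -z: start suspended and wait for a perform_tests RPC before issuing IO
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z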
00:24:20.274 [2024-11-06 14:07:59.328185] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:24:20.274 [2024-11-06 14:07:59.328239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031429 ]
00:24:20.274 [2024-11-06 14:07:59.393926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:20.274 [2024-11-06 14:07:59.422690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:20.275 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:20.275 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:24:20.275 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.275 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.533 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:20.533 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.533 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:20.533 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.533 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.533 14:07:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.792 nvme0n1
00:24:20.792 14:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:20.792 14:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.792 14:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:20.792 14:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.792 14:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:20.792 14:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:21.052 Running I/O for 2 seconds...
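The error-injection rig for the write pass is all in the trace above: error-stat accounting is switched on, crc32c corruption is disabled while the controller attaches with TCP data digest (--ddgst) enabled, and only then is corruption armed and the timed workload kicked off over RPC. Condensed into the equivalent standalone RPC sequence (a sketch against this run's socket; the trace holds the authoritative arguments):

    sock=/var/tmp/bperf.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # keep per-status-code NVMe error counters and retry failed IO indefinitely
    $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # compute digests correctly while the controller connects
    $rpc -s $sock accel_error_inject_error -o crc32c -t disable
    # attach the target with data digest (DDGST) enabled on the TCP qpairs
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm crc32c corruption (-i 256 as passed above), then start the timed run
    $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 256
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests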
00:24:21.052 [2024-11-06 14:08:00.100814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eebb98
00:24:21.052 [2024-11-06 14:08:00.101683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.052 [2024-11-06 14:08:00.101713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0
[... the same three-line group repeats from 14:08:00.109 through 14:08:00.330 for dozens more qid:1 WRITE commands (pdu values stepping through 0x200016e*, varying cid and lba): each 4 KiB write hits an injected data digest error on tqpair=(0x23549d0) and is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:24:21.053 [2024-11-06 14:08:00.330083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee4140
00:24:21.053 [2024-11-06 14:08:00.330970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.053 [2024-11-06 14:08:00.330986] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.338545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016edf550 00:24:21.313 [2024-11-06 14:08:00.339400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.313 [2024-11-06 14:08:00.339416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.346992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee0630 00:24:21.313 [2024-11-06 14:08:00.347836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.313 [2024-11-06 14:08:00.347852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.355474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee1710 00:24:21.313 [2024-11-06 14:08:00.356332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.313 [2024-11-06 14:08:00.356347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.363952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee27f0 00:24:21.313 [2024-11-06 14:08:00.364792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.313 [2024-11-06 14:08:00.364808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.372397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee38d0 00:24:21.313 [2024-11-06 14:08:00.373254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.313 [2024-11-06 14:08:00.373270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.380856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eebb98 00:24:21.313 [2024-11-06 14:08:00.381692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.313 [2024-11-06 14:08:00.381708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.313 [2024-11-06 14:08:00.389288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eecc78 00:24:21.314 [2024-11-06 14:08:00.390148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 
14:08:00.390166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.397913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eedd58 00:24:21.314 [2024-11-06 14:08:00.398779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.398795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.406390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eeee38 00:24:21.314 [2024-11-06 14:08:00.407250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.407266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.414839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eeff18 00:24:21.314 [2024-11-06 14:08:00.415691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.415707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.423303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef0ff8 00:24:21.314 [2024-11-06 14:08:00.424144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.424160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.431757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef20d8 00:24:21.314 [2024-11-06 14:08:00.432586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.432603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.440200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef31b8 00:24:21.314 [2024-11-06 14:08:00.441070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.441085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.448667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee6738 00:24:21.314 [2024-11-06 14:08:00.449492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:21.314 [2024-11-06 14:08:00.449507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.457124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee5658 00:24:21.314 [2024-11-06 14:08:00.457990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.458006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.465578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee4578 00:24:21.314 [2024-11-06 14:08:00.466455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.466471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.474031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016edf118 00:24:21.314 [2024-11-06 14:08:00.474845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.474861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.482477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee01f8 00:24:21.314 [2024-11-06 14:08:00.483304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.483320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.490924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee12d8 00:24:21.314 [2024-11-06 14:08:00.491748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.491763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.499402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee23b8 00:24:21.314 [2024-11-06 14:08:00.500223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.500239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.507847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee3498 00:24:21.314 [2024-11-06 14:08:00.508691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22578 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.508707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.516300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eeb760 00:24:21.314 [2024-11-06 14:08:00.517165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.517181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.524761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eec840 00:24:21.314 [2024-11-06 14:08:00.525568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.525584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.533212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eed920 00:24:21.314 [2024-11-06 14:08:00.534050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.534066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.542732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eeea00 00:24:21.314 [2024-11-06 14:08:00.544063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.544078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.550582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016edf118 00:24:21.314 [2024-11-06 14:08:00.551959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.551975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.559569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016efbcf0 00:24:21.314 [2024-11-06 14:08:00.560535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.560550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.568557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee9168 00:24:21.314 [2024-11-06 14:08:00.569744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.569760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.576932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eff3c8 00:24:21.314 [2024-11-06 14:08:00.578157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.578173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.584900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee12d8 00:24:21.314 [2024-11-06 14:08:00.585954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.585970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.314 [2024-11-06 14:08:00.593381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ede038 00:24:21.314 [2024-11-06 14:08:00.594425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.314 [2024-11-06 14:08:00.594440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.601848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee12d8 00:24:21.574 [2024-11-06 14:08:00.602908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.602923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.610921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ede038 00:24:21.574 [2024-11-06 14:08:00.612221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.612240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.619522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef46d0 00:24:21.574 [2024-11-06 14:08:00.620838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.620853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.626568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef7100 00:24:21.574 [2024-11-06 14:08:00.627180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:70 nsid:1 lba:3044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.627196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.635628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eea680 00:24:21.574 [2024-11-06 14:08:00.636607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.636622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.644093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee4578 00:24:21.574 [2024-11-06 14:08:00.645085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.645100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.652178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016edece0 00:24:21.574 [2024-11-06 14:08:00.653099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.653114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.660916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee9168 00:24:21.574 [2024-11-06 14:08:00.661704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.661721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.669360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef6cc8 00:24:21.574 [2024-11-06 14:08:00.670121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.670137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.677842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016edece0 00:24:21.574 [2024-11-06 14:08:00.678638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.678655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.686173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef2d80 00:24:21.574 [2024-11-06 14:08:00.686858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.686875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.694490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016efdeb0 00:24:21.574 [2024-11-06 14:08:00.695309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.695325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.703513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ede8a8 00:24:21.574 [2024-11-06 14:08:00.704530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.704546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.711949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee5220 00:24:21.574 [2024-11-06 14:08:00.712988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.713003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.720442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee4de8 00:24:21.574 [2024-11-06 14:08:00.721471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.721487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.728909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee3060 00:24:21.574 [2024-11-06 14:08:00.729924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.729940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.737378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016efc560 00:24:21.574 [2024-11-06 14:08:00.738406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.738422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.745838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016efc998 00:24:21.574 [2024-11-06 
14:08:00.746886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.574 [2024-11-06 14:08:00.746901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.574 [2024-11-06 14:08:00.754295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef31b8 00:24:21.575 [2024-11-06 14:08:00.755332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.755348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.762738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee5658 00:24:21.575 [2024-11-06 14:08:00.763715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.763730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.771205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef0788 00:24:21.575 [2024-11-06 14:08:00.772216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.772231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.779693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee3498 00:24:21.575 [2024-11-06 14:08:00.780718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.780734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.788148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eddc00 00:24:21.575 [2024-11-06 14:08:00.789163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.789178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.796603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef2948 00:24:21.575 [2024-11-06 14:08:00.797641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.797657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.805041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ede8a8 
00:24:21.575 [2024-11-06 14:08:00.806062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.806077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.813502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee5220 00:24:21.575 [2024-11-06 14:08:00.814511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.814527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.821968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee4de8 00:24:21.575 [2024-11-06 14:08:00.822990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.823006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.830423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee3060 00:24:21.575 [2024-11-06 14:08:00.831417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.831436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.838876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016efc560 00:24:21.575 [2024-11-06 14:08:00.839898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.839913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.847324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016efc998 00:24:21.575 [2024-11-06 14:08:00.848340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.848355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.575 [2024-11-06 14:08:00.855767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef31b8 00:24:21.575 [2024-11-06 14:08:00.856789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.575 [2024-11-06 14:08:00.856805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.864254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with 
pdu=0x200016ee5658 00:24:21.835 [2024-11-06 14:08:00.865276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.865292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.872714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef0788 00:24:21.835 [2024-11-06 14:08:00.873740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.873755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.882204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee3498 00:24:21.835 [2024-11-06 14:08:00.883623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.883638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.888215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee9e10 00:24:21.835 [2024-11-06 14:08:00.888854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.888870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.896816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee4578 00:24:21.835 [2024-11-06 14:08:00.897487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.897503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.905297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef57b0 00:24:21.835 [2024-11-06 14:08:00.905942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.905958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.913770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef6890 00:24:21.835 [2024-11-06 14:08:00.914378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.914394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.922224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23549d0) with pdu=0x200016eebb98 00:24:21.835 [2024-11-06 14:08:00.922869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.922885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.930672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee1f80 00:24:21.835 [2024-11-06 14:08:00.931335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.931351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.939109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee0ea0 00:24:21.835 [2024-11-06 14:08:00.939751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.939767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.947553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016edfdc0 00:24:21.835 [2024-11-06 14:08:00.948153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.948169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.956016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eea680 00:24:21.835 [2024-11-06 14:08:00.956662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.956678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.964484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eefae0 00:24:21.835 [2024-11-06 14:08:00.965129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.965145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.972939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef7100 00:24:21.835 [2024-11-06 14:08:00.973547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.973563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.981508] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x23549d0) with pdu=0x200016eed920 00:24:21.835 [2024-11-06 14:08:00.982149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.982165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.989973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ede470 00:24:21.835 [2024-11-06 14:08:00.990619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.990635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:00.998453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eee190 00:24:21.835 [2024-11-06 14:08:00.999092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:00.999108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:01.006920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee5658 00:24:21.835 [2024-11-06 14:08:01.007569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:01.007585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:01.015365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eeb760 00:24:21.835 [2024-11-06 14:08:01.015974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.835 [2024-11-06 14:08:01.015990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.835 [2024-11-06 14:08:01.023812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee9168 00:24:21.836 [2024-11-06 14:08:01.024461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.024477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.032397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef31b8 00:24:21.836 [2024-11-06 14:08:01.033038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.033053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.040842] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef5378 00:24:21.836 [2024-11-06 14:08:01.041473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.041489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.049318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef6458 00:24:21.836 [2024-11-06 14:08:01.049925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.049943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.057774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016eebfd0 00:24:21.836 [2024-11-06 14:08:01.058447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.058463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.066225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ef0350 00:24:21.836 [2024-11-06 14:08:01.066868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.066883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.074683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee1b48 00:24:21.836 [2024-11-06 14:08:01.075345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 [2024-11-06 14:08:01.083113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee0a68 00:24:21.836 [2024-11-06 14:08:01.083758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.083774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:21.836 29838.00 IOPS, 116.55 MiB/s [2024-11-06T13:08:01.120Z] [2024-11-06 14:08:01.091561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee01f8 00:24:21.836 [2024-11-06 14:08:01.092188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.836 [2024-11-06 14:08:01.092204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 
cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:21.836 [2024-11-06 14:08:01.100021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee01f8
00:24:21.836 [2024-11-06 14:08:01.100651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:21.836 [2024-11-06 14:08:01.100667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x23549d0) -> WRITE command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 8 ms for about a hundred more WRITEs between 14:08:01.108 and 14:08:02.082, differing only in the per-command cid and lba values; elided here for readability ...]
00:24:22.877 30027.50 IOPS, 117.29 MiB/s [2024-11-06T13:08:02.161Z]
[2024-11-06 14:08:02.090422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23549d0) with pdu=0x200016ee01f8
00:24:22.877 [2024-11-06 14:08:02.091063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.877 [2024-11-06 14:08:02.091078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:22.877
00:24:22.877 Latency(us)
00:24:22.877 [2024-11-06T13:08:02.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.877 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:22.877 nvme0n1 : 2.01 30031.29 117.31 0.00 0.00 4256.46 1979.73 15182.51
00:24:22.877 [2024-11-06T13:08:02.161Z] ===================================================================================================================
00:24:22.877 [2024-11-06T13:08:02.161Z] Total : 30031.29 117.31 0.00 0.00 4256.46 1979.73 15182.51
00:24:22.877 {
00:24:22.877   "results": [
00:24:22.877     {
00:24:22.877       "job": "nvme0n1",
00:24:22.877       "core_mask": "0x2",
00:24:22.877       "workload": "randwrite",
00:24:22.877       "status": "finished",
00:24:22.877       "queue_depth": 128,
00:24:22.877       "io_size": 4096,
00:24:22.877       "runtime": 2.006141,
00:24:22.877       "iops": 30031.288927348578,
00:24:22.877       "mibps": 117.30972237245538,
00:24:22.877       "io_failed": 0,
00:24:22.877       "io_timeout": 0,
00:24:22.877       "avg_latency_us": 4256.455277773167,
00:24:22.877       "min_latency_us": 1979.7333333333333,
00:24:22.877       "max_latency_us": 15182.506666666666
00:24:22.877     }
00:24:22.877   ],
00:24:22.877   "core_count": 1
00:24:22.877 }
00:24:22.877 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:22.877 | .driver_specific
00:24:22.877 | .nvme_error
00:24:22.877 | .status_code
00:24:22.877 | .command_transient_transport_error'
00:24:23.136 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1031429
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1031429 ']'
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1031429
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1031429
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1031429'
killing process with pid 1031429
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1031429
Received shutdown signal, test time was about 2.000000 seconds
00:24:23.136 Latency(us)
[2024-11-06T13:08:02.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-06T13:08:02.420Z] ===================================================================================================================
[2024-11-06T13:08:02.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:23.136 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1031429
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1032107
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1032107 /var/tmp/bperf.sock
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1032107 ']'
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:23.395 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:23.395 [2024-11-06 14:08:02.457064] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:24:23.395 [2024-11-06 14:08:02.457118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032107 ]
00:24:23.396 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:23.396 Zero copy mechanism will not be used.
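The verification step that just ran (get_transient_errcount followed by (( 236 > 0 ))) boils down to one RPC round-trip: ask bdevperf for per-bdev I/O statistics over its private socket and let jq pull the transient-transport-error counter out of the NVMe error stats. A standalone sketch of the same check, assuming the workspace path used by this job:

    # sketch of get_transient_errcount: read the error counter over bdevperf's RPC socket
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errs=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))   # the run above counted 236, so the 4096-byte/qd-128 case passes

The nvme_error block is only populated because bdev_nvme_set_options was called with --nvme-error-stat before the controller was attached; the same setup is traced again below for the 131072-byte run.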
00:24:23.396 [2024-11-06 14:08:02.522402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.396 [2024-11-06 14:08:02.550711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:23.396 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:23.654 14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:23.913 nvme0n1
00:24:23.913 14:08:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:08:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
14:08:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:08:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:08:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:08:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:24.173 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:24.173 Zero copy mechanism will not be used.
00:24:24.173 Running I/O for 2 seconds...
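(Annotation: the RPCs traced above arm the digest-error scenario — per-status-code NVMe error counters with unlimited bdev retries, CRC-32C error injection in the accel layer held at disable until the controller is up, a controller attached with --ddgst so data digests are generated and checked on the TCP transport, and finally injection flipped to corrupt. As standalone commands against the same socket the sequence amounts to the sketch below; paths, address, and flags are copied from the trace, not offered as a canonical recipe:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # CRC-32C injection starts disabled so the attach itself stays clean
    $RPC accel_error_inject_error -o crc32c -t disable
    # Data digest (--ddgst) makes the TCP transport checksum every payload
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # With nvme0n1 visible, corrupt crc32c operations (-i 32 as traced)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

Each corrupted CRC produces a data digest mismatch on the wire, so the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); with retries unlimited it is reissued rather than failed, and the error counter read back by get_transient_errcount ticks up — exactly the record pattern that fills the rest of this run.)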
00:24:24.173 [2024-11-06 14:08:03.271369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90
00:24:24.173 [2024-11-06 14:08:03.271699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.173 [2024-11-06 14:08:03.271727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:24.173 [2024-11-06 14:08:03.281536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90
00:24:24.173 [2024-11-06 14:08:03.281974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.173 [2024-11-06 14:08:03.281995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:24.173 [2024-11-06 14:08:03.291336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90
00:24:24.173 [2024-11-06 14:08:03.291710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.173 [2024-11-06 14:08:03.291729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same record triple repeats for every affected write through the rest of the 2-second run (wall clock 14:08:03.301 to 14:08:04.098, elapsed 00:24:24.173 through 00:24:24.957): a tcp.c:2233:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90, the offending WRITE printed by nvme_qpair.c:243:nvme_io_qpair_print_command (always qid:1 cid:15 nsid:1 len:32, lba varying per I/O), and the nvme_qpair.c:474:spdk_nvme_print_completion line COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 with sqhd cycling 0001/0021/0041/0061; the repeated records are condensed here ...]
00:24:24.957 [2024-11-06 14:08:04.107882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90
00:24:24.957 [2024-11-06 14:08:04.107972] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.107987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.115699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.115875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.115891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.124940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.125173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.125191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.131869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.131944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.131960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.136480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.136530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.136546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.141011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.141067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.141082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.145857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.145897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.145912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.151901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 
[2024-11-06 14:08:04.152006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.152021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.159175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.159214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.159229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.163057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.163107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.163123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.165615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.165664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.165680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.168174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.168236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.168257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.171047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.171102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.171117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.174587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.174644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.174659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.177653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.177716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.957 [2024-11-06 14:08:04.177730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.957 [2024-11-06 14:08:04.180266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.957 [2024-11-06 14:08:04.180325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.180339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.185852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.185892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.185907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.189876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.189938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.189954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.194805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.194844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.194859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.203721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.203962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.203978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.212667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.212935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.212951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.219119] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.219370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.219385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.226556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.226775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.226790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.958 [2024-11-06 14:08:04.233499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:24.958 [2024-11-06 14:08:04.233539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.958 [2024-11-06 14:08:04.233555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.239432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.239473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.239487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.246918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.247110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.247125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.254364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.254435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.254451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.218 4696.00 IOPS, 587.00 MiB/s [2024-11-06T13:08:04.502Z] [2024-11-06 14:08:04.261919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.261957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.261973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.266763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.266806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.266822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.271807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.271877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.271892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.277979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.278020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.278035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.283114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.283154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.283169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.287893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.287934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.287949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.293138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.293186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.293201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.299879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.300108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.300125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.307673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.307711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.307726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.312774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.312813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.312828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.317475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.317516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.317531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.321381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.321420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.321435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.323951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.323992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.324007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.326563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.326605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.326621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.331021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.331061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.331076] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.334902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.218 [2024-11-06 14:08:04.334944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.218 [2024-11-06 14:08:04.334959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.218 [2024-11-06 14:08:04.337353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.337397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.337413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.339827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.339871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.339885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.342280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.342320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.342341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.344750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.344795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.344811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.347187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.347240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.347260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.349651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.349743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.349758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.352544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.352623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.352638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.355641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.355743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.355759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.358879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.358959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.358974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.361747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.361850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.364633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.364711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.364725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.367498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.367582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.367597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.370370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.370455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 
14:08:04.370470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.373141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.373205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.373220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.375976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.376078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.376092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.378799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.378894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.378909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.381620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.381703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.381718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.384414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.384519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.384534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.387256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.387331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.387345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.391261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.391332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:25.219 [2024-11-06 14:08:04.391347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.397657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.397706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.397721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.403058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.403097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.403112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.408967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.409007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.409022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.413483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.413540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.413555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.418131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.418169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.418184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.424338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.424379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.424394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.219 [2024-11-06 14:08:04.428449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.219 [2024-11-06 14:08:04.428528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.219 [2024-11-06 14:08:04.428543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.433640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.433874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.433890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.440436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.440489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.440507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.447755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.447999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.448014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.452307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.452353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.452369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.454774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.454819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.454834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.457311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.457392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.457407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.460066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.460137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.460152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.468057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.468259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.468275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.477110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.477320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.477335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.486482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.486682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.486698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.220 [2024-11-06 14:08:04.496073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.220 [2024-11-06 14:08:04.496270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.220 [2024-11-06 14:08:04.496285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.505148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.505340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.505355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.514748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.514960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.514975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.523930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.524097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.524112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.531560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.531609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.531624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.534048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.534103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.534119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.536613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.536666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.536681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.539218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.480 [2024-11-06 14:08:04.539286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.480 [2024-11-06 14:08:04.539301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.480 [2024-11-06 14:08:04.541696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.481 [2024-11-06 14:08:04.541735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.481 [2024-11-06 14:08:04.541749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.481 [2024-11-06 14:08:04.544146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.481 [2024-11-06 14:08:04.544185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.481 [2024-11-06 14:08:04.544200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.481 [2024-11-06 14:08:04.546622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90 00:24:25.481 [2024-11-06 14:08:04.546662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.481 [2024-11-06 14:08:04.546677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... roughly 140 near-identical command/completion pairs elided (timestamps 14:08:04.549 through 14:08:05.263, qid:1, cid:0 then cid:15, varying lba): for each queued 128 KiB WRITE (len:32 blocks), tcp.c:2233:data_crc32_calc_done reports "*ERROR*: Data digest error on tqpair=(0x2354f00) with pdu=0x200016efef90", nvme_qpair.c:243 prints the command, and nvme_qpair.c:474 prints its completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
00:24:26.005 5344.00 IOPS, 668.00 MiB/s [2024-11-06T13:08:05.289Z]
00:24:26.005
00:24:26.005 Latency(us)
[2024-11-06T13:08:05.289Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average  min      max
00:24:26.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:26.005 nvme0n1            : 2.01        5337.37  667.17  0.00    0.00  2991.41  1160.53  11414.19
[2024-11-06T13:08:05.289Z] ===================================================================================================================
[2024-11-06T13:08:05.289Z] Total              :             5337.37  667.17  0.00    0.00  2991.41  1160.53  11414.19
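
As a quick consistency check on these numbers (a sketch, not harness output; assumes only a shell with bc installed): with io_size 131072 bytes, MiB/s = IOPS x 131072 / 1048576.

    # 5337.37 IOPS x 128 KiB per IO, expressed in MiB/s
    echo 'scale=2; 5337.37 * 131072 / 1048576' | bc
    # -> 667.17, matching the reported mibps

The average latency also lines up via Little's law: queue_depth / IOPS = 16 / 5337.37 ~= 2998 us, close to the reported 2991.41 us average.
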
Total : 5337.37 667.17 0.00 0.00 2991.41 1160.53 11414.19 00:24:26.005 { 00:24:26.005 "results": [ 00:24:26.005 { 00:24:26.005 "job": "nvme0n1", 00:24:26.005 "core_mask": "0x2", 00:24:26.005 "workload": "randwrite", 00:24:26.005 "status": "finished", 00:24:26.005 "queue_depth": 16, 00:24:26.005 "io_size": 131072, 00:24:26.005 "runtime": 2.005483, 00:24:26.005 "iops": 5337.367606706215, 00:24:26.005 "mibps": 667.1709508382769, 00:24:26.005 "io_failed": 0, 00:24:26.005 "io_timeout": 0, 00:24:26.005 "avg_latency_us": 2991.407832585949, 00:24:26.005 "min_latency_us": 1160.5333333333333, 00:24:26.005 "max_latency_us": 11414.186666666666 00:24:26.005 } 00:24:26.005 ], 00:24:26.005 "core_count": 1 00:24:26.005 } 00:24:26.005 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:26.005 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:26.005 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:26.005 | .driver_specific 00:24:26.005 | .nvme_error 00:24:26.005 | .status_code 00:24:26.005 | .command_transient_transport_error' 00:24:26.005 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 )) 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1032107 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1032107 ']' 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1032107 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032107 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032107' 00:24:26.265 killing process with pid 1032107 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1032107 00:24:26.265 Received shutdown signal, test time was about 2.000000 seconds 00:24:26.265 00:24:26.265 Latency(us) 00:24:26.265 [2024-11-06T13:08:05.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.265 [2024-11-06T13:08:05.549Z] =================================================================================================================== 00:24:26.265 [2024-11-06T13:08:05.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.265 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1032107 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@116 -- # killprocess 1030055 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1030055 ']' 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1030055 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1030055 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1030055' 00:24:26.525 killing process with pid 1030055 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1030055 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1030055 00:24:26.525 00:24:26.525 real 0m12.813s 00:24:26.525 user 0m25.017s 00:24:26.525 sys 0m3.051s 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.525 ************************************ 00:24:26.525 END TEST nvmf_digest_error 00:24:26.525 ************************************ 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.525 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:24:26.526 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.526 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.526 rmmod nvme_tcp 00:24:26.526 rmmod nvme_fabrics 00:24:26.526 rmmod nvme_keyring 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1030055 ']' 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1030055 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1030055 ']' 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1030055 00:24:26.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1030055) - No such 
process 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1030055 is not found' 00:24:26.786 Process with pid 1030055 is not found 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.786 14:08:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.692 00:24:28.692 real 0m35.112s 00:24:28.692 user 0m54.389s 00:24:28.692 sys 0m10.429s 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.692 ************************************ 00:24:28.692 END TEST nvmf_digest 00:24:28.692 ************************************ 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.692 ************************************ 00:24:28.692 START TEST nvmf_bdevperf 00:24:28.692 ************************************ 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:28.692 * Looking for test storage... 
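The nvmftestfini teardown above scrubs only the firewall rules the test itself added: ipts tags every rule it inserts with an SPDK_NVMF comment, so iptr can round-trip the whole ruleset through a filter and leave everything else untouched. A minimal sketch of that idiom, as it appears in nvmf/common.sh:

    # Drop every rule carrying the SPDK_NVMF comment tag; keep the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore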
00:24:28.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:28.692 14:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.953 --rc genhtml_branch_coverage=1 00:24:28.953 --rc genhtml_function_coverage=1 00:24:28.953 --rc genhtml_legend=1 00:24:28.953 --rc geninfo_all_blocks=1 00:24:28.953 --rc geninfo_unexecuted_blocks=1 00:24:28.953 00:24:28.953 ' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.953 --rc genhtml_branch_coverage=1 00:24:28.953 --rc genhtml_function_coverage=1 00:24:28.953 --rc genhtml_legend=1 00:24:28.953 --rc geninfo_all_blocks=1 00:24:28.953 --rc geninfo_unexecuted_blocks=1 00:24:28.953 00:24:28.953 ' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.953 --rc genhtml_branch_coverage=1 00:24:28.953 --rc genhtml_function_coverage=1 00:24:28.953 --rc genhtml_legend=1 00:24:28.953 --rc geninfo_all_blocks=1 00:24:28.953 --rc geninfo_unexecuted_blocks=1 00:24:28.953 00:24:28.953 ' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:28.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.953 --rc genhtml_branch_coverage=1 00:24:28.953 --rc genhtml_function_coverage=1 00:24:28.953 --rc genhtml_legend=1 00:24:28.953 --rc geninfo_all_blocks=1 00:24:28.953 --rc geninfo_unexecuted_blocks=1 00:24:28.953 00:24:28.953 ' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:28.953 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.954 14:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:34.232 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:34.232 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.232 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
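The device scan above buckets known NIC device IDs into the e810/x722/mlx arrays (both 0x8086:0x159b functions found here are E810 ports), then asks sysfs which kernel net interface each matching PCI function owns. A rough standalone equivalent of the pci_net_devs glob, assuming the 0000:31:00.0 address discovered above:

    # sysfs lists the net interface(s) bound to a PCI function under
    # /sys/bus/pci/devices/<addr>/net/ -- the same path the test globs.
    pci=0000:31:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "${dev##*/}"    # prints e.g. cvl_0_0
    done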
00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:34.233 Found net devices under 0000:31:00.0: cvl_0_0 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:34.233 Found net devices under 0000:31:00.1: cvl_0_1 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:24:34.233 00:24:34.233 --- 10.0.0.2 ping statistics --- 00:24:34.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.233 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:34.233 00:24:34.233 --- 10.0.0.1 ping statistics --- 00:24:34.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.233 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1037136 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1037136 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1037136 ']' 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:34.233 14:08:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.233 [2024-11-06 14:08:13.469148] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:24:34.233 [2024-11-06 14:08:13.469196] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.493 [2024-11-06 14:08:13.540701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.493 [2024-11-06 14:08:13.570084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.493 [2024-11-06 14:08:13.570116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.493 [2024-11-06 14:08:13.570122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.493 [2024-11-06 14:08:13.570127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.493 [2024-11-06 14:08:13.570131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.493 [2024-11-06 14:08:13.571234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.493 [2024-11-06 14:08:13.571386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.493 [2024-11-06 14:08:13.571471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.062 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.063 [2024-11-06 14:08:14.276616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.063 Malloc0 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
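The ping pair above verifies the topology nvmf_tcp_init just built: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic presumably crosses the link between the two physical ports rather than kernel loopback. A condensed sketch of that setup, using the names and addresses from the log:

    # Target port in its own namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

This is also why NVMF_APP gets re-wrapped with NVMF_TARGET_NS_CMD: the target process itself must run inside the namespace, as the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" launch above shows.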
00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.063 [2024-11-06 14:08:14.326729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:35.063 { 00:24:35.063 "params": { 00:24:35.063 "name": "Nvme$subsystem", 00:24:35.063 "trtype": "$TEST_TRANSPORT", 00:24:35.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.063 "adrfam": "ipv4", 00:24:35.063 "trsvcid": "$NVMF_PORT", 00:24:35.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.063 "hdgst": ${hdgst:-false}, 00:24:35.063 "ddgst": ${ddgst:-false} 00:24:35.063 }, 00:24:35.063 "method": "bdev_nvme_attach_controller" 00:24:35.063 } 00:24:35.063 EOF 00:24:35.063 )") 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:24:35.063 14:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:35.063 "params": { 00:24:35.063 "name": "Nvme1", 00:24:35.063 "trtype": "tcp", 00:24:35.063 "traddr": "10.0.0.2", 00:24:35.063 "adrfam": "ipv4", 00:24:35.063 "trsvcid": "4420", 00:24:35.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.063 "hdgst": false, 00:24:35.063 "ddgst": false 00:24:35.063 }, 00:24:35.063 "method": "bdev_nvme_attach_controller" 00:24:35.063 }' 00:24:35.322 [2024-11-06 14:08:14.364329] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
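With the target listening, bdevperf never reads a config file from disk: gen_nvmf_target_json assembles the controller description and hands it over an anonymous descriptor (--json /dev/fd/62 here, /dev/fd/63 for the later run). A hypothetical standalone equivalent; the "subsystems"/"bdev" wrapper below is the usual SPDK JSON-config shape and is an assumption here, only the params block is taken verbatim from the printf output above:

    # Sketch only: write the attach-controller config to a file and point
    # bdevperf at it, matching the workload flags of the 1-second run above.
    cat > /tmp/bperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1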
00:24:35.322 [2024-11-06 14:08:14.364378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037484 ] 00:24:35.322 [2024-11-06 14:08:14.442713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.322 [2024-11-06 14:08:14.479060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.582 Running I/O for 1 seconds... 00:24:36.520 11167.00 IOPS, 43.62 MiB/s 00:24:36.520 Latency(us) 00:24:36.520 [2024-11-06T13:08:15.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.520 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:36.520 Verification LBA range: start 0x0 length 0x4000 00:24:36.520 Nvme1n1 : 1.00 11249.78 43.94 0.00 0.00 11326.45 1740.80 9885.01 00:24:36.520 [2024-11-06T13:08:15.804Z] =================================================================================================================== 00:24:36.520 [2024-11-06T13:08:15.804Z] Total : 11249.78 43.94 0.00 0.00 11326.45 1740.80 9885.01 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1037825 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:36.778 { 00:24:36.778 "params": { 00:24:36.778 "name": "Nvme$subsystem", 00:24:36.778 "trtype": "$TEST_TRANSPORT", 00:24:36.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.778 "adrfam": "ipv4", 00:24:36.778 "trsvcid": "$NVMF_PORT", 00:24:36.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.778 "hdgst": ${hdgst:-false}, 00:24:36.778 "ddgst": ${ddgst:-false} 00:24:36.778 }, 00:24:36.778 "method": "bdev_nvme_attach_controller" 00:24:36.778 } 00:24:36.778 EOF 00:24:36.778 )") 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:24:36.778 14:08:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:36.778 "params": { 00:24:36.778 "name": "Nvme1", 00:24:36.778 "trtype": "tcp", 00:24:36.778 "traddr": "10.0.0.2", 00:24:36.778 "adrfam": "ipv4", 00:24:36.778 "trsvcid": "4420", 00:24:36.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.778 "hdgst": false, 00:24:36.778 "ddgst": false 00:24:36.778 }, 00:24:36.778 "method": "bdev_nvme_attach_controller" 00:24:36.778 }' 00:24:36.778 [2024-11-06 14:08:15.871684] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:24:36.778 [2024-11-06 14:08:15.871737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037825 ] 00:24:36.778 [2024-11-06 14:08:15.950209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.778 [2024-11-06 14:08:15.985159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.037 Running I/O for 15 seconds... 00:24:39.354 11437.00 IOPS, 44.68 MiB/s [2024-11-06T13:08:18.901Z] 11932.50 IOPS, 46.61 MiB/s [2024-11-06T13:08:18.901Z] 14:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1037136 00:24:39.617 14:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:39.617 [2024-11-06 14:08:18.854485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.617 [2024-11-06 14:08:18.854520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.617 [2024-11-06 14:08:18.854536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.617 [2024-11-06 14:08:18.854545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.617 [2024-11-06 14:08:18.854553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.617 [2024-11-06 14:08:18.854560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.617 [2024-11-06 14:08:18.854567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.617 [2024-11-06 14:08:18.854574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.617 [2024-11-06 14:08:18.854582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.617 [2024-11-06 14:08:18.854588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.617 [2024-11-06 14:08:18.854595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.617 [2024-11-06 
14:08:18.854601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:39.617 [2024-11-06 14:08:18.854609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:39.617 [2024-11-06 14:08:18.854616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 120 further queued READ commands (lba 123360 through 124312, len:8) printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) in the same pattern, elapsed-time prefixes 00:24:39.617 through 00:24:39.620 ...]
00:24:39.620 [2024-11-06 14:08:18.856213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8c950 is same with the state(6) to be set
00:24:39.620 [2024-11-06 14:08:18.856220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:39.620 [2024-11-06 14:08:18.856225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:39.620 [2024-11-06 14:08:18.856229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124320 len:8 PRP1 0x0 PRP2 0x0
00:24:39.620 [2024-11-06 14:08:18.856236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:39.620 [2024-11-06 14:08:18.858768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.620 [2024-11-06 14:08:18.858813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.620 [2024-11-06 14:08:18.859480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.620 [2024-11-06 14:08:18.859511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.620 [2024-11-06 14:08:18.859521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.620 [2024-11-06 14:08:18.859691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.620 [2024-11-06 14:08:18.859845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.620 [2024-11-06 14:08:18.859851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.620 [2024-11-06 14:08:18.859858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.620 [2024-11-06 14:08:18.859866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.620 [2024-11-06 14:08:18.871662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.620 [2024-11-06 14:08:18.872131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.620 [2024-11-06 14:08:18.872146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.620 [2024-11-06 14:08:18.872152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.620 [2024-11-06 14:08:18.872339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.620 [2024-11-06 14:08:18.872492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.620 [2024-11-06 14:08:18.872499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.620 [2024-11-06 14:08:18.872505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.620 [2024-11-06 14:08:18.872510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.620 [2024-11-06 14:08:18.884319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.620 [2024-11-06 14:08:18.884891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.620 [2024-11-06 14:08:18.884923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.620 [2024-11-06 14:08:18.884932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.620 [2024-11-06 14:08:18.885098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.620 [2024-11-06 14:08:18.885259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.620 [2024-11-06 14:08:18.885267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.620 [2024-11-06 14:08:18.885273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.620 [2024-11-06 14:08:18.885279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.620 [2024-11-06 14:08:18.896925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.620 [2024-11-06 14:08:18.897531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.620 [2024-11-06 14:08:18.897563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.620 [2024-11-06 14:08:18.897572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.620 [2024-11-06 14:08:18.897738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.620 [2024-11-06 14:08:18.897891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.620 [2024-11-06 14:08:18.897898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.620 [2024-11-06 14:08:18.897903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.620 [2024-11-06 14:08:18.897910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.882 [2024-11-06 14:08:18.909573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.882 [2024-11-06 14:08:18.910136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.882 [2024-11-06 14:08:18.910167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.882 [2024-11-06 14:08:18.910176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.882 [2024-11-06 14:08:18.910351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.882 [2024-11-06 14:08:18.910506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.882 [2024-11-06 14:08:18.910513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.882 [2024-11-06 14:08:18.910519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.882 [2024-11-06 14:08:18.910525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.882 [2024-11-06 14:08:18.922169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.882 [2024-11-06 14:08:18.922749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.882 [2024-11-06 14:08:18.922781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.882 [2024-11-06 14:08:18.922790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.882 [2024-11-06 14:08:18.922959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.882 [2024-11-06 14:08:18.923112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.882 [2024-11-06 14:08:18.923119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.882 [2024-11-06 14:08:18.923125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.882 [2024-11-06 14:08:18.923131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.882 [2024-11-06 14:08:18.934784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.882 [2024-11-06 14:08:18.935395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.882 [2024-11-06 14:08:18.935426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.882 [2024-11-06 14:08:18.935435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.882 [2024-11-06 14:08:18.935601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.882 [2024-11-06 14:08:18.935753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.882 [2024-11-06 14:08:18.935760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.882 [2024-11-06 14:08:18.935766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.882 [2024-11-06 14:08:18.935772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.882 [2024-11-06 14:08:18.947420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.882 [2024-11-06 14:08:18.947975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.882 [2024-11-06 14:08:18.948006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.882 [2024-11-06 14:08:18.948015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.882 [2024-11-06 14:08:18.948181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.882 [2024-11-06 14:08:18.948342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.882 [2024-11-06 14:08:18.948350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.882 [2024-11-06 14:08:18.948355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.882 [2024-11-06 14:08:18.948362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.882 [2024-11-06 14:08:18.960137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.882 [2024-11-06 14:08:18.960750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.882 [2024-11-06 14:08:18.960782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.882 [2024-11-06 14:08:18.960791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.882 [2024-11-06 14:08:18.960956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.882 [2024-11-06 14:08:18.961109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.882 [2024-11-06 14:08:18.961120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.882 [2024-11-06 14:08:18.961126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.882 [2024-11-06 14:08:18.961132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.882 [2024-11-06 14:08:18.972783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.882 [2024-11-06 14:08:18.973279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.882 [2024-11-06 14:08:18.973295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.883 [2024-11-06 14:08:18.973301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.883 [2024-11-06 14:08:18.973451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.883 [2024-11-06 14:08:18.973602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.883 [2024-11-06 14:08:18.973609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.883 [2024-11-06 14:08:18.973614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.883 [2024-11-06 14:08:18.973620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.883 [2024-11-06 14:08:18.985413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.883 [2024-11-06 14:08:18.986002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.883 [2024-11-06 14:08:18.986034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.883 [2024-11-06 14:08:18.986042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.883 [2024-11-06 14:08:18.986208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.883 [2024-11-06 14:08:18.986369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.883 [2024-11-06 14:08:18.986378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.883 [2024-11-06 14:08:18.986383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.883 [2024-11-06 14:08:18.986389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.883 [2024-11-06 14:08:18.998033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.883 [2024-11-06 14:08:18.998628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.883 [2024-11-06 14:08:18.998660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.883 [2024-11-06 14:08:18.998669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.883 [2024-11-06 14:08:18.998834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.883 [2024-11-06 14:08:18.998987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.883 [2024-11-06 14:08:18.998994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.883 [2024-11-06 14:08:18.999000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.883 [2024-11-06 14:08:18.999009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.883 [2024-11-06 14:08:19.010660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.883 [2024-11-06 14:08:19.011241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.883 [2024-11-06 14:08:19.011279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.883 [2024-11-06 14:08:19.011287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.883 [2024-11-06 14:08:19.011452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.883 [2024-11-06 14:08:19.011606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.883 [2024-11-06 14:08:19.011613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.883 [2024-11-06 14:08:19.011619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.883 [2024-11-06 14:08:19.011624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.883 [2024-11-06 14:08:19.023298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.883 [2024-11-06 14:08:19.023840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.883 [2024-11-06 14:08:19.023872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.883 [2024-11-06 14:08:19.023880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.883 [2024-11-06 14:08:19.024046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.883 [2024-11-06 14:08:19.024199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.883 [2024-11-06 14:08:19.024206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.883 [2024-11-06 14:08:19.024212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.883 [2024-11-06 14:08:19.024218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:39.883 [2024-11-06 14:08:19.036004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:39.883 [2024-11-06 14:08:19.036412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.883 [2024-11-06 14:08:19.036444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:39.883 [2024-11-06 14:08:19.036453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:39.883 [2024-11-06 14:08:19.036621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:39.883 [2024-11-06 14:08:19.036774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:39.883 [2024-11-06 14:08:19.036781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:39.883 [2024-11-06 14:08:19.036787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:39.883 [2024-11-06 14:08:19.036793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:39.883 [2024-11-06 14:08:19.048723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.883 [2024-11-06 14:08:19.049316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.883 [2024-11-06 14:08:19.049352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.883 [2024-11-06 14:08:19.049361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.883 [2024-11-06 14:08:19.049528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.883 [2024-11-06 14:08:19.049681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.883 [2024-11-06 14:08:19.049688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.883 [2024-11-06 14:08:19.049694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.883 [2024-11-06 14:08:19.049700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.883 [2024-11-06 14:08:19.061350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.883 [2024-11-06 14:08:19.061919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.883 [2024-11-06 14:08:19.061951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.883 [2024-11-06 14:08:19.061960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.883 [2024-11-06 14:08:19.062126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.883 [2024-11-06 14:08:19.062287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.883 [2024-11-06 14:08:19.062295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.883 [2024-11-06 14:08:19.062301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.883 [2024-11-06 14:08:19.062307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.883 [2024-11-06 14:08:19.073950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.883 [2024-11-06 14:08:19.074552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.883 [2024-11-06 14:08:19.074584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.883 [2024-11-06 14:08:19.074593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.883 [2024-11-06 14:08:19.074758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.883 [2024-11-06 14:08:19.074912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.883 [2024-11-06 14:08:19.074919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.883 [2024-11-06 14:08:19.074925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.883 [2024-11-06 14:08:19.074931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.883 [2024-11-06 14:08:19.086619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.883 [2024-11-06 14:08:19.087255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.883 [2024-11-06 14:08:19.087287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.883 [2024-11-06 14:08:19.087295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.883 [2024-11-06 14:08:19.087464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.883 [2024-11-06 14:08:19.087618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.883 [2024-11-06 14:08:19.087625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.883 [2024-11-06 14:08:19.087631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.883 [2024-11-06 14:08:19.087637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.883 [2024-11-06 14:08:19.099280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.883 [2024-11-06 14:08:19.099887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.883 [2024-11-06 14:08:19.099918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.884 [2024-11-06 14:08:19.099927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.884 [2024-11-06 14:08:19.100093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.884 [2024-11-06 14:08:19.100254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.884 [2024-11-06 14:08:19.100262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.884 [2024-11-06 14:08:19.100268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.884 [2024-11-06 14:08:19.100274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.884 [2024-11-06 14:08:19.111911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.884 [2024-11-06 14:08:19.112555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.884 [2024-11-06 14:08:19.112586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.884 [2024-11-06 14:08:19.112595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.884 [2024-11-06 14:08:19.112761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.884 [2024-11-06 14:08:19.112914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.884 [2024-11-06 14:08:19.112922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.884 [2024-11-06 14:08:19.112928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.884 [2024-11-06 14:08:19.112934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.884 [2024-11-06 14:08:19.124603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.884 [2024-11-06 14:08:19.125056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.884 [2024-11-06 14:08:19.125073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.884 [2024-11-06 14:08:19.125079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.884 [2024-11-06 14:08:19.125229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.884 [2024-11-06 14:08:19.125386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.884 [2024-11-06 14:08:19.125396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.884 [2024-11-06 14:08:19.125402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.884 [2024-11-06 14:08:19.125408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.884 [2024-11-06 14:08:19.137326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.884 [2024-11-06 14:08:19.137882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.884 [2024-11-06 14:08:19.137913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.884 [2024-11-06 14:08:19.137922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.884 [2024-11-06 14:08:19.138088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.884 [2024-11-06 14:08:19.138241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.884 [2024-11-06 14:08:19.138257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.884 [2024-11-06 14:08:19.138262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.884 [2024-11-06 14:08:19.138268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.884 [2024-11-06 14:08:19.149967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.884 [2024-11-06 14:08:19.150535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.884 [2024-11-06 14:08:19.150567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.884 [2024-11-06 14:08:19.150576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.884 [2024-11-06 14:08:19.150742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.884 [2024-11-06 14:08:19.150895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.884 [2024-11-06 14:08:19.150902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.884 [2024-11-06 14:08:19.150908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.884 [2024-11-06 14:08:19.150914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:39.884 [2024-11-06 14:08:19.162567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:39.884 [2024-11-06 14:08:19.163140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.884 [2024-11-06 14:08:19.163172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:39.884 [2024-11-06 14:08:19.163181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:39.884 [2024-11-06 14:08:19.163355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:39.884 [2024-11-06 14:08:19.163510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:39.884 [2024-11-06 14:08:19.163517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:39.884 [2024-11-06 14:08:19.163524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:39.884 [2024-11-06 14:08:19.163529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.144 [2024-11-06 14:08:19.175175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.144 [2024-11-06 14:08:19.175742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.144 [2024-11-06 14:08:19.175773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.144 [2024-11-06 14:08:19.175782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.144 [2024-11-06 14:08:19.175947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.144 [2024-11-06 14:08:19.176100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.144 [2024-11-06 14:08:19.176107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.144 [2024-11-06 14:08:19.176113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.144 [2024-11-06 14:08:19.176119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.144 [2024-11-06 14:08:19.187783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.144 [2024-11-06 14:08:19.188390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.144 [2024-11-06 14:08:19.188422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.144 [2024-11-06 14:08:19.188431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.188597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.188749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.188757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.188763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.188768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.200418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.200874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.200906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.200915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.201080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.201234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.201241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.201258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.201268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.213058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.213664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.213699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.213708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.213873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.214027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.214034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.214040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.214045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.225695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.226177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.226193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.226198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.226355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.226506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.226513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.226518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.226523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 10599.00 IOPS, 41.40 MiB/s [2024-11-06T13:08:19.429Z] [2024-11-06 14:08:19.238294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.238886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.238917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.238926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.239092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.239254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.239262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.239267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.239273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.250954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.251565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.251597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.251606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.251775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.251928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.251935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.251941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.251947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.263589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.264187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.264219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.264227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.264400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.264555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.264562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.264567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.264573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.276249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.276852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.276884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.276893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.277058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.277212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.277219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.277225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.277231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.288911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.289380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.289411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.289420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.289588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.289741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.289752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.289758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.289764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.301554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.302145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.145 [2024-11-06 14:08:19.302177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.145 [2024-11-06 14:08:19.302186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.145 [2024-11-06 14:08:19.302359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.145 [2024-11-06 14:08:19.302513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.145 [2024-11-06 14:08:19.302521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.145 [2024-11-06 14:08:19.302527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.145 [2024-11-06 14:08:19.302533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.145 [2024-11-06 14:08:19.314180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.145 [2024-11-06 14:08:19.314773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.314805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.314814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.314979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.315133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.315140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.315146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.315153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.326798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.327397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.327428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.327437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.327602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.327756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.327763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.327769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.327774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.339430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.339987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.340019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.340028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.340193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.340355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.340363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.340369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.340374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.352155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.352733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.352764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.352773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.352938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.353091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.353098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.353105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.353111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.364761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.365227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.365243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.365255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.365406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.365557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.365564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.365570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.365575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.377374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.377857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.377874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.377880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.378030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.378180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.378187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.378192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.378197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.389984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.390452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.390466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.390472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.390622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.390772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.390779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.390784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.390790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.402590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.403045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.403060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.403066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.403215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.403371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.403379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.403384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.403390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.146 [2024-11-06 14:08:19.415312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.146 [2024-11-06 14:08:19.415753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.146 [2024-11-06 14:08:19.415766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.146 [2024-11-06 14:08:19.415772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.146 [2024-11-06 14:08:19.415927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.146 [2024-11-06 14:08:19.416077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.146 [2024-11-06 14:08:19.416084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.146 [2024-11-06 14:08:19.416090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.146 [2024-11-06 14:08:19.416095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.406 [2024-11-06 14:08:19.428021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.406 [2024-11-06 14:08:19.428578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.406 [2024-11-06 14:08:19.428609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.406 [2024-11-06 14:08:19.428619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.406 [2024-11-06 14:08:19.428784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.406 [2024-11-06 14:08:19.428938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.406 [2024-11-06 14:08:19.428945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.406 [2024-11-06 14:08:19.428950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.406 [2024-11-06 14:08:19.428956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.406 [2024-11-06 14:08:19.440609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.406 [2024-11-06 14:08:19.441046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.406 [2024-11-06 14:08:19.441077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.406 [2024-11-06 14:08:19.441086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.406 [2024-11-06 14:08:19.441262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.406 [2024-11-06 14:08:19.441416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.406 [2024-11-06 14:08:19.441423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.406 [2024-11-06 14:08:19.441429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.406 [2024-11-06 14:08:19.441434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.406 [2024-11-06 14:08:19.453214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.406 [2024-11-06 14:08:19.453815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.406 [2024-11-06 14:08:19.453847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.406 [2024-11-06 14:08:19.453856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.406 [2024-11-06 14:08:19.454022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.406 [2024-11-06 14:08:19.454175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.406 [2024-11-06 14:08:19.454182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.406 [2024-11-06 14:08:19.454191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.406 [2024-11-06 14:08:19.454198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.406 [2024-11-06 14:08:19.465842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.466441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.466473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.466482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.466648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.466801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.466808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.466814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.466820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.478465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.478963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.478978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.478984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.479134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.479291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.479298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.479304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.479310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.491093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.491639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.491671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.491680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.491845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.491999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.492006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.492012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.492019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.503691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.504279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.504311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.504320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.504486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.504640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.504647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.504653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.504658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.516308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.516939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.516970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.516979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.517144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.517304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.517312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.517318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.517324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.528962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.529459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.529491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.529500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.529667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.529820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.529828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.529834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.529840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.541627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.542165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.542196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.542208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.542383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.542538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.542545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.542551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.542556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.554338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.554723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.554739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.554746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.554896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.555047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.555054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.555059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.555064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.566986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.567541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.567572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.567581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.567747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.407 [2024-11-06 14:08:19.567900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.407 [2024-11-06 14:08:19.567907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.407 [2024-11-06 14:08:19.567913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.407 [2024-11-06 14:08:19.567919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.407 [2024-11-06 14:08:19.579576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.407 [2024-11-06 14:08:19.580045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.407 [2024-11-06 14:08:19.580061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.407 [2024-11-06 14:08:19.580067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.407 [2024-11-06 14:08:19.580217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.580378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.580385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.580392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.580397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.592174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.592792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.592823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.592832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.592998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.593151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.593158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.593164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.593170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.604815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.605359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.605390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.605399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.605567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.605720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.605727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.605733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.605739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.617529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.618065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.618097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.618106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.618281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.618435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.618442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.618451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.618457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.630253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.630807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.630839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.630848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.631013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.631167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.631173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.631179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.631185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.642978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.643582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.643614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.643623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.643788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.643942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.643949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.643954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.643961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.655607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.656063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.656078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.656084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.656234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.656391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.656398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.656404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.656409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.668329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.668781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.668794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.668800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.668949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.669099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.669106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.669112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.669117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.408 [2024-11-06 14:08:19.681046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.408 [2024-11-06 14:08:19.681601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.408 [2024-11-06 14:08:19.681632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.408 [2024-11-06 14:08:19.681641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.408 [2024-11-06 14:08:19.681807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.408 [2024-11-06 14:08:19.681961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.408 [2024-11-06 14:08:19.681968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.408 [2024-11-06 14:08:19.681974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.408 [2024-11-06 14:08:19.681980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.693651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.694204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.694235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.694251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.694419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.694572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.694579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.694585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.670 [2024-11-06 14:08:19.694591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.706404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.706962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.706994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.707007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.707172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.707333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.707341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.707347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.670 [2024-11-06 14:08:19.707353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.719133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.719736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.719767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.719776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.719942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.720095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.720101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.720107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.670 [2024-11-06 14:08:19.720114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.731780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.732330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.732362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.732371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.732539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.732693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.732700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.732706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.670 [2024-11-06 14:08:19.732714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.744371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.744870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.744888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.744894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.745044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.745197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.745204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.745210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.670 [2024-11-06 14:08:19.745215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.757002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.757488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.757520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.757529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.757696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.757850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.757857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.757864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.670 [2024-11-06 14:08:19.757870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.670 [2024-11-06 14:08:19.769669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.670 [2024-11-06 14:08:19.770254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.670 [2024-11-06 14:08:19.770285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.670 [2024-11-06 14:08:19.770295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.670 [2024-11-06 14:08:19.770463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.670 [2024-11-06 14:08:19.770616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.670 [2024-11-06 14:08:19.770623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.670 [2024-11-06 14:08:19.770629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.770635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.782310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.782909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.782940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.782949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.783114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.783273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.783281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.783292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.783298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.794956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.795319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.795336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.795342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.795493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.795643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.795649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.795655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.795660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.807583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.808036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.808050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.808056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.808206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.808361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.808368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.808374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.808379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.820307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.820756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.820769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.820775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.820925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.821075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.821082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.821087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.821092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.833029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.833449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.833462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.833468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.833618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.833768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.833775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.833780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.833785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.845723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.846171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.846184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.846189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.846344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.846495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.846503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.846510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.846516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.858316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.858794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.858807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.858813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.858963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.859113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.859120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.859126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.859131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.870984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.871419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.871433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.871442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.871592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.871742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.871749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.871754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.871760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.883586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.884030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.884044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.884049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.884199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.884355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.884362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.884367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.671 [2024-11-06 14:08:19.884373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.671 [2024-11-06 14:08:19.896276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.671 [2024-11-06 14:08:19.896733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.671 [2024-11-06 14:08:19.896748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.671 [2024-11-06 14:08:19.896754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.671 [2024-11-06 14:08:19.896903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.671 [2024-11-06 14:08:19.897053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.671 [2024-11-06 14:08:19.897060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.671 [2024-11-06 14:08:19.897065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.672 [2024-11-06 14:08:19.897070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.672 [2024-11-06 14:08:19.908867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.672 [2024-11-06 14:08:19.909315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.672 [2024-11-06 14:08:19.909328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.672 [2024-11-06 14:08:19.909334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.672 [2024-11-06 14:08:19.909484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.672 [2024-11-06 14:08:19.909637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.672 [2024-11-06 14:08:19.909644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.672 [2024-11-06 14:08:19.909650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.672 [2024-11-06 14:08:19.909655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.672 [2024-11-06 14:08:19.921487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.672 [2024-11-06 14:08:19.921871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.672 [2024-11-06 14:08:19.921886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.672 [2024-11-06 14:08:19.921892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.672 [2024-11-06 14:08:19.922042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.672 [2024-11-06 14:08:19.922192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.672 [2024-11-06 14:08:19.922199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.672 [2024-11-06 14:08:19.922204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.672 [2024-11-06 14:08:19.922209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.672 [2024-11-06 14:08:19.934158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.672 [2024-11-06 14:08:19.934503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.672 [2024-11-06 14:08:19.934519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.672 [2024-11-06 14:08:19.934524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.672 [2024-11-06 14:08:19.934674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.672 [2024-11-06 14:08:19.934824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.672 [2024-11-06 14:08:19.934831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.672 [2024-11-06 14:08:19.934836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.672 [2024-11-06 14:08:19.934841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.672 [2024-11-06 14:08:19.946789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.672 [2024-11-06 14:08:19.947273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.672 [2024-11-06 14:08:19.947287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.672 [2024-11-06 14:08:19.947293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.672 [2024-11-06 14:08:19.947443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.672 [2024-11-06 14:08:19.947593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.672 [2024-11-06 14:08:19.947600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.672 [2024-11-06 14:08:19.947608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.672 [2024-11-06 14:08:19.947613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:19.959412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:19.959855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.934 [2024-11-06 14:08:19.959868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.934 [2024-11-06 14:08:19.959874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.934 [2024-11-06 14:08:19.960023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.934 [2024-11-06 14:08:19.960174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.934 [2024-11-06 14:08:19.960181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.934 [2024-11-06 14:08:19.960186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.934 [2024-11-06 14:08:19.960191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:19.972134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:19.972585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.934 [2024-11-06 14:08:19.972598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.934 [2024-11-06 14:08:19.972604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.934 [2024-11-06 14:08:19.972754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.934 [2024-11-06 14:08:19.972903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.934 [2024-11-06 14:08:19.972910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.934 [2024-11-06 14:08:19.972916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.934 [2024-11-06 14:08:19.972921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:19.984726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:19.985215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.934 [2024-11-06 14:08:19.985228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.934 [2024-11-06 14:08:19.985234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.934 [2024-11-06 14:08:19.985390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.934 [2024-11-06 14:08:19.985541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.934 [2024-11-06 14:08:19.985547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.934 [2024-11-06 14:08:19.985553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.934 [2024-11-06 14:08:19.985558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:19.997357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:19.997832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.934 [2024-11-06 14:08:19.997845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.934 [2024-11-06 14:08:19.997851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.934 [2024-11-06 14:08:19.998000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.934 [2024-11-06 14:08:19.998150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.934 [2024-11-06 14:08:19.998157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.934 [2024-11-06 14:08:19.998163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.934 [2024-11-06 14:08:19.998167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:20.010018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:20.010489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.934 [2024-11-06 14:08:20.010504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.934 [2024-11-06 14:08:20.010510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.934 [2024-11-06 14:08:20.010661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.934 [2024-11-06 14:08:20.010811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.934 [2024-11-06 14:08:20.010817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.934 [2024-11-06 14:08:20.010823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.934 [2024-11-06 14:08:20.010828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:20.022659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:20.023099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.934 [2024-11-06 14:08:20.023113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.934 [2024-11-06 14:08:20.023119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.934 [2024-11-06 14:08:20.023274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.934 [2024-11-06 14:08:20.023426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.934 [2024-11-06 14:08:20.023432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.934 [2024-11-06 14:08:20.023438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.934 [2024-11-06 14:08:20.023444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.934 [2024-11-06 14:08:20.035380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.934 [2024-11-06 14:08:20.035854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.035867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.035879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.036029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.036180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.036186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.036191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.036196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.047989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.048432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.048446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.048452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.048603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.048753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.048760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.048766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.048770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.060583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.061081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.061094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.061100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.061255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.061406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.061414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.061419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.061424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.073222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.073684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.073698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.073704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.073854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.074004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.074013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.074019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.074024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.085845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.086319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.086333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.086338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.086497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.086648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.086655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.086661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.086666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.098470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.098934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.098947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.098952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.099102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.099258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.099264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.099270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.099275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.111071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.111420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.111434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.111440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.111590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.111740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.111746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.111751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.111759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.123733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.124340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.124373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.124382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.124553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.124706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.124714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.124719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.124725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.136384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.136854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.136870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.136876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.137027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.137177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.137184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.137189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.137194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.148988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.149596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.935 [2024-11-06 14:08:20.149628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.935 [2024-11-06 14:08:20.149637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.935 [2024-11-06 14:08:20.149804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.935 [2024-11-06 14:08:20.149957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.935 [2024-11-06 14:08:20.149964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.935 [2024-11-06 14:08:20.149970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.935 [2024-11-06 14:08:20.149976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.935 [2024-11-06 14:08:20.161634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.935 [2024-11-06 14:08:20.162194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.936 [2024-11-06 14:08:20.162226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.936 [2024-11-06 14:08:20.162235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.936 [2024-11-06 14:08:20.162409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.936 [2024-11-06 14:08:20.162563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.936 [2024-11-06 14:08:20.162570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.936 [2024-11-06 14:08:20.162577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.936 [2024-11-06 14:08:20.162583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.936 [2024-11-06 14:08:20.174236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.936 [2024-11-06 14:08:20.174590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.936 [2024-11-06 14:08:20.174606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.936 [2024-11-06 14:08:20.174612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.936 [2024-11-06 14:08:20.174762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.936 [2024-11-06 14:08:20.174913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.936 [2024-11-06 14:08:20.174920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.936 [2024-11-06 14:08:20.174925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.936 [2024-11-06 14:08:20.174930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.936 [2024-11-06 14:08:20.186877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.936 [2024-11-06 14:08:20.187314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.936 [2024-11-06 14:08:20.187329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.936 [2024-11-06 14:08:20.187335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.936 [2024-11-06 14:08:20.187485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.936 [2024-11-06 14:08:20.187635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.936 [2024-11-06 14:08:20.187641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.936 [2024-11-06 14:08:20.187647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.936 [2024-11-06 14:08:20.187653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.936 [2024-11-06 14:08:20.199577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.936 [2024-11-06 14:08:20.200022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.936 [2024-11-06 14:08:20.200035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.936 [2024-11-06 14:08:20.200041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.936 [2024-11-06 14:08:20.200194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.936 [2024-11-06 14:08:20.200349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.936 [2024-11-06 14:08:20.200356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.936 [2024-11-06 14:08:20.200362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.936 [2024-11-06 14:08:20.200368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:40.936 [2024-11-06 14:08:20.212290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:40.936 [2024-11-06 14:08:20.212737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.936 [2024-11-06 14:08:20.212751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:40.936 [2024-11-06 14:08:20.212756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:40.936 [2024-11-06 14:08:20.212906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:40.936 [2024-11-06 14:08:20.213056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:40.936 [2024-11-06 14:08:20.213063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:40.936 [2024-11-06 14:08:20.213069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:40.936 [2024-11-06 14:08:20.213074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.224998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.225346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.225360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.225366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.225515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.225666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.225673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.225678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.225684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 7949.25 IOPS, 31.05 MiB/s [2024-11-06T13:08:20.481Z] [2024-11-06 14:08:20.237614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.238105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.238137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.238146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.238318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.238472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.238483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.238490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.238495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.250290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.250762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.250778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.250784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.250934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.251085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.251093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.251098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.251104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.262891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.263514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.263546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.263555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.263720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.263874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.263881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.263888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.263894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.275553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.276103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.276135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.276144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.276322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.276476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.276483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.276488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.276498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.288161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.288671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.288688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.288694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.288844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.288994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.289001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.289006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.289011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.300798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.301256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.301270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.301276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.301426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.301576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.301583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.301588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.301593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.313382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.313858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.197 [2024-11-06 14:08:20.313871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.197 [2024-11-06 14:08:20.313876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.197 [2024-11-06 14:08:20.314025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.197 [2024-11-06 14:08:20.314175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.197 [2024-11-06 14:08:20.314182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.197 [2024-11-06 14:08:20.314187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.197 [2024-11-06 14:08:20.314192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.197 [2024-11-06 14:08:20.325976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.197 [2024-11-06 14:08:20.326490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.326503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.326510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.326659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.326810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.326817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.326822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.326827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.338645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.339103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.339117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.339123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.339276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.339428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.339434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.339440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.339445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.351362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.351834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.351848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.351854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.352003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.352154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.352160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.352165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.352170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.363953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.364407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.364421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.364426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.364578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.364729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.364735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.364740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.364745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.376661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.377207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.377239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.377261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.377428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.377582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.377589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.377595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.377601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.389396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.389988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.390020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.390029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.390195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.390354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.390362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.390368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.390375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.402046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.402617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.402648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.402657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.402825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.402978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.402989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.402995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.403002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.414654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.415255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.415287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.415296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.415463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.415617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.415624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.415630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.415636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.427302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.427931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.427963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.427972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.428139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.428299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.428307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.428312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.428318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.439960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.440550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.198 [2024-11-06 14:08:20.440582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.198 [2024-11-06 14:08:20.440591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.198 [2024-11-06 14:08:20.440757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.198 [2024-11-06 14:08:20.440910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.198 [2024-11-06 14:08:20.440917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.198 [2024-11-06 14:08:20.440923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.198 [2024-11-06 14:08:20.440933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.198 [2024-11-06 14:08:20.452588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.198 [2024-11-06 14:08:20.453206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.199 [2024-11-06 14:08:20.453237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.199 [2024-11-06 14:08:20.453253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.199 [2024-11-06 14:08:20.453419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.199 [2024-11-06 14:08:20.453572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.199 [2024-11-06 14:08:20.453579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.199 [2024-11-06 14:08:20.453585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.199 [2024-11-06 14:08:20.453591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.199 [2024-11-06 14:08:20.465234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.199 [2024-11-06 14:08:20.465768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.199 [2024-11-06 14:08:20.465799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.199 [2024-11-06 14:08:20.465808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.199 [2024-11-06 14:08:20.465974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.199 [2024-11-06 14:08:20.466128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.199 [2024-11-06 14:08:20.466135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.199 [2024-11-06 14:08:20.466141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.199 [2024-11-06 14:08:20.466147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.199 [2024-11-06 14:08:20.477950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.199 [2024-11-06 14:08:20.478542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.199 [2024-11-06 14:08:20.478574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.199 [2024-11-06 14:08:20.478584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.199 [2024-11-06 14:08:20.478752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.199 [2024-11-06 14:08:20.478905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.199 [2024-11-06 14:08:20.478912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.199 [2024-11-06 14:08:20.478918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.199 [2024-11-06 14:08:20.478925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.502 [2024-11-06 14:08:20.490597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.502 [2024-11-06 14:08:20.491076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.502 [2024-11-06 14:08:20.491095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.502 [2024-11-06 14:08:20.491101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.502 [2024-11-06 14:08:20.491256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.502 [2024-11-06 14:08:20.491408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.502 [2024-11-06 14:08:20.491415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.502 [2024-11-06 14:08:20.491421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.502 [2024-11-06 14:08:20.491426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.502 [2024-11-06 14:08:20.503212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.502 [2024-11-06 14:08:20.503716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.502 [2024-11-06 14:08:20.503730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.502 [2024-11-06 14:08:20.503735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.502 [2024-11-06 14:08:20.503885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.502 [2024-11-06 14:08:20.504036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.502 [2024-11-06 14:08:20.504043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.502 [2024-11-06 14:08:20.504048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.502 [2024-11-06 14:08:20.504053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.502 [2024-11-06 14:08:20.515836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.502 [2024-11-06 14:08:20.516455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.502 [2024-11-06 14:08:20.516487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.502 [2024-11-06 14:08:20.516496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.502 [2024-11-06 14:08:20.516662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.502 [2024-11-06 14:08:20.516815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.516822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.516827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.516835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.528483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.529067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.529099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.529108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.529283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.529437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.529444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.529450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.529456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.541119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.541691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.541722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.541731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.541897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.542050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.542057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.542063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.542069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.553719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.554087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.554105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.554111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.554270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.554423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.554430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.554436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.554441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.566366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.566907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.566938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.566947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.567112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.567273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.567285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.567290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.567296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.579087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.579656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.579687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.579696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.579862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.580016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.580023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.580029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.580035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.591692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.592237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.592274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.592283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.592450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.592604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.592611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.592616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.592622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.604412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.605073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.605105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.605114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.605287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.605441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.605448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.605454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.605460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.617112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.617701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.617733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.617742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.617909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.618063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.618070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.618076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.618082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.629736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:41.503 [2024-11-06 14:08:20.630231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.503 [2024-11-06 14:08:20.630251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:41.503 [2024-11-06 14:08:20.630257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:41.503 [2024-11-06 14:08:20.630408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:41.503 [2024-11-06 14:08:20.630558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:41.503 [2024-11-06 14:08:20.630565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:41.503 [2024-11-06 14:08:20.630570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:41.503 [2024-11-06 14:08:20.630575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:41.503 [2024-11-06 14:08:20.642355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.503 [2024-11-06 14:08:20.642809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.503 [2024-11-06 14:08:20.642822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.503 [2024-11-06 14:08:20.642828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.642978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.643128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.643134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.643139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.643144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.504 [2024-11-06 14:08:20.655066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.655543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.655581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.655590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.655758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.655912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.655919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.655925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.655931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.504 [2024-11-06 14:08:20.667717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.668316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.668347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.668356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.668522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.668675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.668682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.668688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.668694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.504 [2024-11-06 14:08:20.680350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.680908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.680939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.680948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.681113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.681273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.681281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.681287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.681293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.504 [2024-11-06 14:08:20.692945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.693534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.693565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.693574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.693743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.693896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.693904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.693910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.693916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.504 [2024-11-06 14:08:20.705562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.706045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.706077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.706086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.706260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.706414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.706421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.706427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.706432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.504 [2024-11-06 14:08:20.718214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.718819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.718851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.718860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.719025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.719178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.719186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.719192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.719198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.504 [2024-11-06 14:08:20.730844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.731480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.731512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.731521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.731686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.731839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.731847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.731856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.731862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.504 [2024-11-06 14:08:20.743513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.744300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.744320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.744326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.744483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.744634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.744641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.744647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.744653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.504 [2024-11-06 14:08:20.756156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.756743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.756775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.504 [2024-11-06 14:08:20.756784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.504 [2024-11-06 14:08:20.756950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.504 [2024-11-06 14:08:20.757103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.504 [2024-11-06 14:08:20.757111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.504 [2024-11-06 14:08:20.757116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.504 [2024-11-06 14:08:20.757122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.504 [2024-11-06 14:08:20.768775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.504 [2024-11-06 14:08:20.769234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.504 [2024-11-06 14:08:20.769254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.505 [2024-11-06 14:08:20.769261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.505 [2024-11-06 14:08:20.769412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.505 [2024-11-06 14:08:20.769562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.505 [2024-11-06 14:08:20.769569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.505 [2024-11-06 14:08:20.769574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.505 [2024-11-06 14:08:20.769580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.841 [2024-11-06 14:08:20.781372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.841 [2024-11-06 14:08:20.781852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.841 [2024-11-06 14:08:20.781865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.841 [2024-11-06 14:08:20.781871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.841 [2024-11-06 14:08:20.782021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.841 [2024-11-06 14:08:20.782171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.841 [2024-11-06 14:08:20.782177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.841 [2024-11-06 14:08:20.782183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.841 [2024-11-06 14:08:20.782189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.841 [2024-11-06 14:08:20.793980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.841 [2024-11-06 14:08:20.794331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.841 [2024-11-06 14:08:20.794345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.841 [2024-11-06 14:08:20.794351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.841 [2024-11-06 14:08:20.794501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.841 [2024-11-06 14:08:20.794652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.841 [2024-11-06 14:08:20.794658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.841 [2024-11-06 14:08:20.794663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.841 [2024-11-06 14:08:20.794668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.841 [2024-11-06 14:08:20.806584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.841 [2024-11-06 14:08:20.807073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.841 [2024-11-06 14:08:20.807086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.807092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.807241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.807397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.807403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.807409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.807414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.842 [2024-11-06 14:08:20.819182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.819763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.819794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.819806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.819971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.820124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.820130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.820135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.820141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.842 [2024-11-06 14:08:20.831787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.832221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.832236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.832242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.832397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.832547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.832553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.832558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.832563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.842 [2024-11-06 14:08:20.844476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.845043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.845074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.845083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.845254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.845408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.845414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.845420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.845425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.842 [2024-11-06 14:08:20.857069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.857634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.857665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.857674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.857839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.857996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.858002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.858008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.858014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.842 [2024-11-06 14:08:20.869669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.870232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.870268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.870277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.870443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.870598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.870605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.870611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.870617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.842 [2024-11-06 14:08:20.882276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.882834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.882864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.882873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.883039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.883191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.883197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.883202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.883208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.842 [2024-11-06 14:08:20.894874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.895345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.895376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.895385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.895553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.895706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.895712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.895721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.895727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.842 [2024-11-06 14:08:20.907521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.908011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.908026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.842 [2024-11-06 14:08:20.908031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.842 [2024-11-06 14:08:20.908181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.842 [2024-11-06 14:08:20.908337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.842 [2024-11-06 14:08:20.908343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.842 [2024-11-06 14:08:20.908348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.842 [2024-11-06 14:08:20.908353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.842 [2024-11-06 14:08:20.920112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.842 [2024-11-06 14:08:20.920578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.842 [2024-11-06 14:08:20.920592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.920598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.920748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.920897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.920903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.920908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.920913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.843 [2024-11-06 14:08:20.932701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:20.933155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:20.933167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.933173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.933327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.933477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.933483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.933488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.933493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.843 [2024-11-06 14:08:20.945287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:20.945828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:20.945858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.945867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.946033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.946186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.946192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.946197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.946203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.843 [2024-11-06 14:08:20.958010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:20.958573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:20.958603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.958612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.958778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.958931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.958937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.958943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.958949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.843 [2024-11-06 14:08:20.970737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:20.971307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:20.971337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.971346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.971511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.971664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.971670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.971676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.971681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.843 [2024-11-06 14:08:20.983337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:20.983903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:20.983932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.983944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.984109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.984269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.984276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.984282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.984287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.843 [2024-11-06 14:08:20.995935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:20.996494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:20.996525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:20.996534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:20.996699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:20.996852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:20.996858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:20.996864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:20.996869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.843 [2024-11-06 14:08:21.008553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:21.009035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:21.009050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:21.009056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:21.009206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:21.009362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:21.009368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:21.009373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:21.009378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.843 [2024-11-06 14:08:21.021148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:21.021706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:21.021737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:21.021746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:21.021911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.843 [2024-11-06 14:08:21.022067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.843 [2024-11-06 14:08:21.022074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.843 [2024-11-06 14:08:21.022079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.843 [2024-11-06 14:08:21.022085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.843 [2024-11-06 14:08:21.033733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.843 [2024-11-06 14:08:21.034202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.843 [2024-11-06 14:08:21.034216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.843 [2024-11-06 14:08:21.034222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.843 [2024-11-06 14:08:21.034377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.034527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.034533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.034538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.034543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.844 [2024-11-06 14:08:21.046448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.046927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.046940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.046945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.047094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.047248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.047254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.047259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.047264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.844 [2024-11-06 14:08:21.059033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.059574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.059604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.059613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.059778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.059930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.059936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.059945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.059951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.844 [2024-11-06 14:08:21.071740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.072323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.072359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.072367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.072532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.072684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.072690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.072696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.072701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.844 [2024-11-06 14:08:21.084356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.084927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.084958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.084966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.085132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.085292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.085299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.085304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.085310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.844 [2024-11-06 14:08:21.096958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.097430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.097460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.097468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.097634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.097786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.097792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.097798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.097803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:41.844 [2024-11-06 14:08:21.109622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.110181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.110211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.110220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.110394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.110548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.110554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.110560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.110566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:41.844 [2024-11-06 14:08:21.122225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:41.844 [2024-11-06 14:08:21.122824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.844 [2024-11-06 14:08:21.122855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:41.844 [2024-11-06 14:08:21.122864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:41.844 [2024-11-06 14:08:21.123029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:41.844 [2024-11-06 14:08:21.123182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:41.844 [2024-11-06 14:08:21.123188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:41.844 [2024-11-06 14:08:21.123193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:41.844 [2024-11-06 14:08:21.123199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.106 [2024-11-06 14:08:21.134856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.106 [2024-11-06 14:08:21.135435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.106 [2024-11-06 14:08:21.135466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.106 [2024-11-06 14:08:21.135474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.106 [2024-11-06 14:08:21.135642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.106 [2024-11-06 14:08:21.135795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.106 [2024-11-06 14:08:21.135801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.106 [2024-11-06 14:08:21.135807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.106 [2024-11-06 14:08:21.135813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.106 [2024-11-06 14:08:21.147477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.106 [2024-11-06 14:08:21.147961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.106 [2024-11-06 14:08:21.147976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.106 [2024-11-06 14:08:21.147985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.106 [2024-11-06 14:08:21.148135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.106 [2024-11-06 14:08:21.148290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.106 [2024-11-06 14:08:21.148297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.106 [2024-11-06 14:08:21.148302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.106 [2024-11-06 14:08:21.148307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.106 [2024-11-06 14:08:21.160094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.106 [2024-11-06 14:08:21.160588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.106 [2024-11-06 14:08:21.160603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.106 [2024-11-06 14:08:21.160608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.106 [2024-11-06 14:08:21.160759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.106 [2024-11-06 14:08:21.160908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.106 [2024-11-06 14:08:21.160914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.106 [2024-11-06 14:08:21.160919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.106 [2024-11-06 14:08:21.160924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.106 [2024-11-06 14:08:21.172696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.106 [2024-11-06 14:08:21.173179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.106 [2024-11-06 14:08:21.173191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.106 [2024-11-06 14:08:21.173197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.106 [2024-11-06 14:08:21.173351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.106 [2024-11-06 14:08:21.173501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.106 [2024-11-06 14:08:21.173507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.106 [2024-11-06 14:08:21.173512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.106 [2024-11-06 14:08:21.173517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.106 [2024-11-06 14:08:21.185294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.106 [2024-11-06 14:08:21.185840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.106 [2024-11-06 14:08:21.185871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.106 [2024-11-06 14:08:21.185879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.106 [2024-11-06 14:08:21.186045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.106 [2024-11-06 14:08:21.186206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.106 [2024-11-06 14:08:21.186213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.106 [2024-11-06 14:08:21.186218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.106 [2024-11-06 14:08:21.186224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.198018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.198588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.198618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.198627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.198795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.198948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.198954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.198960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.198965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.210612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.211172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.211202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.211211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.211386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.211540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.211546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.211552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.211557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.223202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.223753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.223783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.223792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.223957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.224110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.224116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.224125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.224131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 6359.40 IOPS, 24.84 MiB/s [2024-11-06T13:08:21.391Z] [2024-11-06 14:08:21.235916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.236510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.236541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.236550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.236715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.236868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.236874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.236879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.236885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.248535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.249077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.249108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.249117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.249290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.249443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.249450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.249455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.249460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.261123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.261605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.261635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.261644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.261809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.261962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.261968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.261973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.261979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.273765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.274355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.274386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.274395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.274560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.274712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.274719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.274725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.274731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.286393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.286958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.286988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.286996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.287162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.287322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.287329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.287335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.287340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.298991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.299565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.299596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.299605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.299770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.299923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.299929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.107 [2024-11-06 14:08:21.299935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.107 [2024-11-06 14:08:21.299940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.107 [2024-11-06 14:08:21.311585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.107 [2024-11-06 14:08:21.312144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.107 [2024-11-06 14:08:21.312174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.107 [2024-11-06 14:08:21.312186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.107 [2024-11-06 14:08:21.312360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.107 [2024-11-06 14:08:21.312513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.107 [2024-11-06 14:08:21.312520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.312526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.312532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.108 [2024-11-06 14:08:21.324179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.108 [2024-11-06 14:08:21.324632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.108 [2024-11-06 14:08:21.324648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.108 [2024-11-06 14:08:21.324654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.108 [2024-11-06 14:08:21.324803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.108 [2024-11-06 14:08:21.324953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.108 [2024-11-06 14:08:21.324958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.324964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.324968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.108 [2024-11-06 14:08:21.336897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.108 [2024-11-06 14:08:21.337444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.108 [2024-11-06 14:08:21.337474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.108 [2024-11-06 14:08:21.337483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.108 [2024-11-06 14:08:21.337648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.108 [2024-11-06 14:08:21.337801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.108 [2024-11-06 14:08:21.337808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.337813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.337818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.108 [2024-11-06 14:08:21.349628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.108 [2024-11-06 14:08:21.350224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.108 [2024-11-06 14:08:21.350260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.108 [2024-11-06 14:08:21.350269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.108 [2024-11-06 14:08:21.350434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.108 [2024-11-06 14:08:21.350591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.108 [2024-11-06 14:08:21.350597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.350602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.350608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.108 [2024-11-06 14:08:21.362271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.108 [2024-11-06 14:08:21.362729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.108 [2024-11-06 14:08:21.362744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.108 [2024-11-06 14:08:21.362750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.108 [2024-11-06 14:08:21.362900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.108 [2024-11-06 14:08:21.363049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.108 [2024-11-06 14:08:21.363055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.363060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.363065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.108 [2024-11-06 14:08:21.374899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.108 [2024-11-06 14:08:21.375358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.108 [2024-11-06 14:08:21.375389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.108 [2024-11-06 14:08:21.375397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.108 [2024-11-06 14:08:21.375565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.108 [2024-11-06 14:08:21.375717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.108 [2024-11-06 14:08:21.375724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.375729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.375735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.108 [2024-11-06 14:08:21.387544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.108 [2024-11-06 14:08:21.388031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.108 [2024-11-06 14:08:21.388047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.108 [2024-11-06 14:08:21.388053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.108 [2024-11-06 14:08:21.388203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.108 [2024-11-06 14:08:21.388359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.108 [2024-11-06 14:08:21.388365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.108 [2024-11-06 14:08:21.388374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.108 [2024-11-06 14:08:21.388379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.369 [2024-11-06 14:08:21.400210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.369 [2024-11-06 14:08:21.400701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.369 [2024-11-06 14:08:21.400716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.400722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.400871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.401021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.401026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.401032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.401036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.412819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.413337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.413368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.413377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.413545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.413698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.413705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.413710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.413716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.425517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.425981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.425996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.426002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.426152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.426305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.426312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.426317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.426322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.438106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.438459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.438473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.438479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.438628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.438777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.438784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.438789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.438794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.450730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.451309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.451340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.451349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.451516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.451669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.451675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.451681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.451686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.463346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.463931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.463960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.463969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.464134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.464294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.464301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.464307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.464313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.475970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.476578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.476609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.476621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.476786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.476939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.476945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.476950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.476956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.488620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.489188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.489219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.489227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.489399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.489552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.489558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.489564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.489570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.501234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.501693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.501708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.501714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.501864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.502013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.502019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.502024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.502028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.513958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.514527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.370 [2024-11-06 14:08:21.514557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.370 [2024-11-06 14:08:21.514566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.370 [2024-11-06 14:08:21.514731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.370 [2024-11-06 14:08:21.514884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.370 [2024-11-06 14:08:21.514893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.370 [2024-11-06 14:08:21.514899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.370 [2024-11-06 14:08:21.514904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.370 [2024-11-06 14:08:21.526573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.370 [2024-11-06 14:08:21.527111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.527142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.527151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.527324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.527477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.527483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.527489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.527495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.539289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.539853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.539883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.539892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.540060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.540212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.540218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.540224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.540230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.551888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.552517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.552548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.552557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.552722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.552875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.552881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.552887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.552896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.564558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.565017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.565031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.565037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.565187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.565342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.565349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.565354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.565359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.577305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.577840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.577871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.577879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.578045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.578198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.578204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.578209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.578215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.590027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.590498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.590514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.590520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.590670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.590819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.590826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.590831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.590835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.602633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.603110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.603123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.603129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.603282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.603432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.603438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.603443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.603448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.615231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.615795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.615825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.615834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.615999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.616152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.616159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.616164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.616170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.627825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.628257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.628273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.628279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.628429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.628579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.628584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.628589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.628594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.371 [2024-11-06 14:08:21.640520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.371 [2024-11-06 14:08:21.640974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.371 [2024-11-06 14:08:21.640987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.371 [2024-11-06 14:08:21.640992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.371 [2024-11-06 14:08:21.641145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.371 [2024-11-06 14:08:21.641300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.371 [2024-11-06 14:08:21.641306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.371 [2024-11-06 14:08:21.641312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.371 [2024-11-06 14:08:21.641316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.646 [2024-11-06 14:08:21.653241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.646 [2024-11-06 14:08:21.653720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.646 [2024-11-06 14:08:21.653732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.646 [2024-11-06 14:08:21.653738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.646 [2024-11-06 14:08:21.653888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.646 [2024-11-06 14:08:21.654037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.646 [2024-11-06 14:08:21.654043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.646 [2024-11-06 14:08:21.654048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.646 [2024-11-06 14:08:21.654053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.646 [2024-11-06 14:08:21.665839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.647 [2024-11-06 14:08:21.666317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.647 [2024-11-06 14:08:21.666330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.647 [2024-11-06 14:08:21.666335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.647 [2024-11-06 14:08:21.666485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.647 [2024-11-06 14:08:21.666634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.647 [2024-11-06 14:08:21.666639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.647 [2024-11-06 14:08:21.666644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.647 [2024-11-06 14:08:21.666649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.647 [2024-11-06 14:08:21.678431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.647 [2024-11-06 14:08:21.678868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.647 [2024-11-06 14:08:21.678881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.647 [2024-11-06 14:08:21.678886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.647 [2024-11-06 14:08:21.679036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.647 [2024-11-06 14:08:21.679185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.647 [2024-11-06 14:08:21.679193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.647 [2024-11-06 14:08:21.679198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.647 [2024-11-06 14:08:21.679203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.647 [2024-11-06 14:08:21.691141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:42.647 [2024-11-06 14:08:21.691613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.647 [2024-11-06 14:08:21.691628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:24:42.647 [2024-11-06 14:08:21.691634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:24:42.647 [2024-11-06 14:08:21.691783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:24:42.647 [2024-11-06 14:08:21.691932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:42.647 [2024-11-06 14:08:21.691939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:42.647 [2024-11-06 14:08:21.691944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:42.647 [2024-11-06 14:08:21.691949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:42.647 [2024-11-06 14:08:21.703737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.704338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.704368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.704377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.704546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.704698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.704704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.704710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.704716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.647 [2024-11-06 14:08:21.716373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.716938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.716969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.716977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.717143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.717302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.717308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.717314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.717323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.647 [2024-11-06 14:08:21.728973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.729447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.729463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.729469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.729619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.729768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.729774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.729779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.729784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.647 [2024-11-06 14:08:21.741573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.742090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.742120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.742129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.742300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.742453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.742460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.742465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.742470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.647 [2024-11-06 14:08:21.754262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.754768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.754783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.754788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.754938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.755088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.755094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.755099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.755103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.647 [2024-11-06 14:08:21.766893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.767350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.767367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.767373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.767522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.767672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.767678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.767683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.767687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.647 [2024-11-06 14:08:21.779615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.647 [2024-11-06 14:08:21.780086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.647 [2024-11-06 14:08:21.780098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.647 [2024-11-06 14:08:21.780104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.647 [2024-11-06 14:08:21.780258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.647 [2024-11-06 14:08:21.780408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.647 [2024-11-06 14:08:21.780413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.647 [2024-11-06 14:08:21.780418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.647 [2024-11-06 14:08:21.780423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.647 [2024-11-06 14:08:21.792250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.792813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.792843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.792852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.793018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.793170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.793176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.793182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.793188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.648 [2024-11-06 14:08:21.804848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.805477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.805508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.805517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.805688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.805841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.805847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.805853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.805858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.648 [2024-11-06 14:08:21.817518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.818014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.818029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.818035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.818185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.818339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.818345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.818350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.818355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.648 [2024-11-06 14:08:21.830140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.830684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.830715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.830723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.830889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.831042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.831048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.831053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.831059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.648 [2024-11-06 14:08:21.842867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.843374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.843405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.843414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.843582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.843735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.843745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.843750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.843756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1037136 Killed "${NVMF_APP[@]}" "$@" 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1039152 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1039152 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1039152 ']' 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.648 14:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:42.648 [2024-11-06 14:08:21.855558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.856035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.856050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.856055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.856206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.856362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.856368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.856374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.856379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
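Above, bdevperf.sh line 35 reports the old target process (1037136) killed, tgt_init/nvmfappstart launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and waitforlisten (pid 1039152, max_retries=100) blocks until the new process answers on /var/tmp/spdk.sock. A minimal sketch of that wait pattern, assuming the usual socket-poll shape (the real helper lives in test/common/autotest_common.sh and differs in detail):

  waitforlisten() {                        # sketch, not the SPDK helper verbatim
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
      kill -0 "$pid" 2>/dev/null || return 1        # target died while waiting
      [[ -S $rpc_sock ]] && scripts/rpc.py -s "$rpc_sock" rpc_get_methods \
        &>/dev/null && return 0                     # socket up, RPCs answered
      sleep 0.1
    done
    return 1
  }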
00:24:42.648 [2024-11-06 14:08:21.868161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.868530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.868542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.868548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.868698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.868851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.868857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.868862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.868866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.648 [2024-11-06 14:08:21.880783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.881290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.881303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.648 [2024-11-06 14:08:21.881309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.648 [2024-11-06 14:08:21.881458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.648 [2024-11-06 14:08:21.881608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.648 [2024-11-06 14:08:21.881615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.648 [2024-11-06 14:08:21.881621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.648 [2024-11-06 14:08:21.881626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.648 [2024-11-06 14:08:21.887923] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
00:24:42.648 [2024-11-06 14:08:21.887969] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.648 [2024-11-06 14:08:21.893421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.648 [2024-11-06 14:08:21.893883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.648 [2024-11-06 14:08:21.893896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.649 [2024-11-06 14:08:21.893903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.649 [2024-11-06 14:08:21.894054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.649 [2024-11-06 14:08:21.894204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.649 [2024-11-06 14:08:21.894210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.649 [2024-11-06 14:08:21.894216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.649 [2024-11-06 14:08:21.894221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.649 [2024-11-06 14:08:21.906003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.649 [2024-11-06 14:08:21.906456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.649 [2024-11-06 14:08:21.906469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.649 [2024-11-06 14:08:21.906474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.649 [2024-11-06 14:08:21.906624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.649 [2024-11-06 14:08:21.906777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.649 [2024-11-06 14:08:21.906783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.649 [2024-11-06 14:08:21.906788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.649 [2024-11-06 14:08:21.906793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.649 [2024-11-06 14:08:21.918715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.649 [2024-11-06 14:08:21.919169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.649 [2024-11-06 14:08:21.919182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.649 [2024-11-06 14:08:21.919187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.649 [2024-11-06 14:08:21.919341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.649 [2024-11-06 14:08:21.919491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.649 [2024-11-06 14:08:21.919497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.649 [2024-11-06 14:08:21.919502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.649 [2024-11-06 14:08:21.919507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.910 [2024-11-06 14:08:21.931299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.910 [2024-11-06 14:08:21.931643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.910 [2024-11-06 14:08:21.931657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.910 [2024-11-06 14:08:21.931663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.910 [2024-11-06 14:08:21.931813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.910 [2024-11-06 14:08:21.931963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.910 [2024-11-06 14:08:21.931969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.910 [2024-11-06 14:08:21.931974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.910 [2024-11-06 14:08:21.931980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.910 [2024-11-06 14:08:21.943913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.910 [2024-11-06 14:08:21.944347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.910 [2024-11-06 14:08:21.944361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.910 [2024-11-06 14:08:21.944366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.910 [2024-11-06 14:08:21.944516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.910 [2024-11-06 14:08:21.944666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.910 [2024-11-06 14:08:21.944672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.910 [2024-11-06 14:08:21.944680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.910 [2024-11-06 14:08:21.944685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.910 [2024-11-06 14:08:21.956568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.910 [2024-11-06 14:08:21.957008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.910 [2024-11-06 14:08:21.957022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.910 [2024-11-06 14:08:21.957028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.910 [2024-11-06 14:08:21.957177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.910 [2024-11-06 14:08:21.957331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.910 [2024-11-06 14:08:21.957337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.910 [2024-11-06 14:08:21.957342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.910 [2024-11-06 14:08:21.957347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.910 [2024-11-06 14:08:21.959912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:42.910 [2024-11-06 14:08:21.969272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.910 [2024-11-06 14:08:21.969656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.910 [2024-11-06 14:08:21.969670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.910 [2024-11-06 14:08:21.969676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.910 [2024-11-06 14:08:21.969826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.910 [2024-11-06 14:08:21.969975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.910 [2024-11-06 14:08:21.969980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.910 [2024-11-06 14:08:21.969986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:21.969991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.911 [2024-11-06 14:08:21.981922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:21.982473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:21.982506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:21.982516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:21.982691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:21.982844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:21.982850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:21.982856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:21.982862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.911 [2024-11-06 14:08:21.989022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.911 [2024-11-06 14:08:21.989043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.911 [2024-11-06 14:08:21.989050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.911 [2024-11-06 14:08:21.989057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.911 [2024-11-06 14:08:21.989061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:42.911 [2024-11-06 14:08:21.990159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.911 [2024-11-06 14:08:21.990323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.911 [2024-11-06 14:08:21.990547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.911 [2024-11-06 14:08:21.994587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:21.995262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:21.995293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:21.995302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:21.995473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:21.995626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:21.995632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:21.995638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:21.995644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.911 [2024-11-06 14:08:22.007302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.007857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.007872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.007878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.008028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.008178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:22.008184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:22.008189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:22.008195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
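The three reactor_run notices match the -m 0xE core mask the target was started with, just as the trace notices match -e 0xFFFF and the EAL --file-prefix=spdk0 matches -i 0. A hand check of the mask (sketch; flag meanings per the standard SPDK application options):

  # nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  #   -i 0       shm/instance id       -> --file-prefix=spdk0 in the EAL line above
  #   -e 0xFFFF  tracepoint group mask -> "Tracepoint Group Mask 0xFFFF specified"
  #   -m 0xE     reactor core mask
  mask=0xE
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 1, 2 and 3 - matching the three reactor_run notices above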
00:24:42.911 [2024-11-06 14:08:22.019978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.020449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.020481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.020490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.020661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.020819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:22.020825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:22.020831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:22.020837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.911 [2024-11-06 14:08:22.032676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.033208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.033239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.033255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.033426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.033579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:22.033585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:22.033591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:22.033597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.911 [2024-11-06 14:08:22.045382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.045980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.046010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.046020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.046187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.046347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:22.046354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:22.046360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:22.046365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.911 [2024-11-06 14:08:22.058011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.058592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.058623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.058632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.058798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.058950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:22.058957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:22.058968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:22.058974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.911 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:42.911 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:24:42.911 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.911 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.911 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.911 [2024-11-06 14:08:22.070629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.071136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.071151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.071157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.071313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.071463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.911 [2024-11-06 14:08:22.071469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.911 [2024-11-06 14:08:22.071475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.911 [2024-11-06 14:08:22.071480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.911 [2024-11-06 14:08:22.083279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.911 [2024-11-06 14:08:22.083821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.911 [2024-11-06 14:08:22.083852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.911 [2024-11-06 14:08:22.083861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.911 [2024-11-06 14:08:22.084027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.911 [2024-11-06 14:08:22.084180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.912 [2024-11-06 14:08:22.084187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.912 [2024-11-06 14:08:22.084192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.912 [2024-11-06 14:08:22.084198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.912 [2024-11-06 14:08:22.093814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.912 [2024-11-06 14:08:22.095864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.912 [2024-11-06 14:08:22.096352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.912 [2024-11-06 14:08:22.096371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.912 [2024-11-06 14:08:22.096378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.912 [2024-11-06 14:08:22.096528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.912 [2024-11-06 14:08:22.096678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.912 [2024-11-06 14:08:22.096683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.912 [2024-11-06 14:08:22.096689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.912 [2024-11-06 14:08:22.096694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
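rpc_cmd is the test suite's wrapper that forwards to scripts/rpc.py on the target's RPC socket, so the transport-create step above is roughly equivalent to running (socket path assumed from the waitforlisten output earlier):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  # -t tcp : transport type; -u 8192 : IO unit size in bytes
  # -o is carried over verbatim from the script; its expansion is not shown in this log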
00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.912 [2024-11-06 14:08:22.108475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.912 [2024-11-06 14:08:22.109036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.912 [2024-11-06 14:08:22.109067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.912 [2024-11-06 14:08:22.109076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.912 [2024-11-06 14:08:22.109242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.912 [2024-11-06 14:08:22.109402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.912 [2024-11-06 14:08:22.109409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.912 [2024-11-06 14:08:22.109414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.912 [2024-11-06 14:08:22.109420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.912 [2024-11-06 14:08:22.121066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.912 Malloc0 00:24:42.912 [2024-11-06 14:08:22.121723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.912 [2024-11-06 14:08:22.121754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.912 [2024-11-06 14:08:22.121764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.912 [2024-11-06 14:08:22.121930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.912 [2024-11-06 14:08:22.122084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.912 [2024-11-06 14:08:22.122090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.912 [2024-11-06 14:08:22.122095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.912 [2024-11-06 14:08:22.122101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.912 [2024-11-06 14:08:22.133751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.912 [2024-11-06 14:08:22.134341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.912 [2024-11-06 14:08:22.134371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:24:42.912 [2024-11-06 14:08:22.134380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:24:42.912 [2024-11-06 14:08:22.134548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:24:42.912 [2024-11-06 14:08:22.134701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:42.912 [2024-11-06 14:08:22.134708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:42.912 [2024-11-06 14:08:22.134714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:42.912 [2024-11-06 14:08:22.134720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.912 [2024-11-06 14:08:22.140765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.912 14:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1037825 00:24:42.912 [2024-11-06 14:08:22.146380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:42.912 [2024-11-06 14:08:22.171775] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
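Taken together, the rpc_cmd calls above rebuild the whole target stack - malloc bdev, subsystem, namespace, listener - before the host side can reconnect. Run by hand, the sequence would look roughly like this (arguments copied from the log; socket path assumed):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up ("Listening on 10.0.0.2 port 4420"), the long-failing reset finally succeeds ("Resetting controller successful") and bdevperf's throughput samples below start climbing.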
00:24:44.110 5421.67 IOPS, 21.18 MiB/s [2024-11-06T13:08:24.332Z] 6514.00 IOPS, 25.45 MiB/s [2024-11-06T13:08:25.269Z] 7327.75 IOPS, 28.62 MiB/s [2024-11-06T13:08:26.644Z] 7948.89 IOPS, 31.05 MiB/s [2024-11-06T13:08:27.580Z] 8453.50 IOPS, 33.02 MiB/s [2024-11-06T13:08:28.515Z] 8870.36 IOPS, 34.65 MiB/s [2024-11-06T13:08:29.449Z] 9217.92 IOPS, 36.01 MiB/s [2024-11-06T13:08:30.384Z] 9502.23 IOPS, 37.12 MiB/s [2024-11-06T13:08:31.318Z] 9753.43 IOPS, 38.10 MiB/s [2024-11-06T13:08:31.318Z] 9966.20 IOPS, 38.93 MiB/s 00:24:52.034 Latency(us) 00:24:52.034 [2024-11-06T13:08:31.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.034 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:52.034 Verification LBA range: start 0x0 length 0x4000 00:24:52.034 Nvme1n1 : 15.01 9969.23 38.94 11952.97 0.00 5820.68 559.79 17148.59 00:24:52.034 [2024-11-06T13:08:31.318Z] =================================================================================================================== 00:24:52.034 [2024-11-06T13:08:31.318Z] Total : 9969.23 38.94 11952.97 0.00 5820.68 559.79 17148.59 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.293 rmmod nvme_tcp 00:24:52.293 rmmod nvme_fabrics 00:24:52.293 rmmod nvme_keyring 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1039152 ']' 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1039152 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1039152 ']' 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1039152 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1039152 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1039152' 00:24:52.293 killing process with pid 1039152 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1039152 00:24:52.293 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1039152 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.552 14:08:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.453 00:24:54.453 real 0m25.721s 00:24:54.453 user 1m1.887s 00:24:54.453 sys 0m5.776s 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.453 ************************************ 00:24:54.453 END TEST nvmf_bdevperf 00:24:54.453 ************************************ 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.453 ************************************ 00:24:54.453 START TEST nvmf_target_disconnect 00:24:54.453 ************************************ 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:54.453 * Looking for test storage... 
00:24:54.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.453 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:54.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.712 --rc genhtml_branch_coverage=1 00:24:54.712 --rc genhtml_function_coverage=1 00:24:54.712 --rc genhtml_legend=1 00:24:54.712 --rc geninfo_all_blocks=1 00:24:54.712 --rc geninfo_unexecuted_blocks=1 00:24:54.712 00:24:54.712 ' 00:24:54.712 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.713 --rc genhtml_branch_coverage=1 00:24:54.713 --rc genhtml_function_coverage=1 00:24:54.713 --rc genhtml_legend=1 00:24:54.713 --rc geninfo_all_blocks=1 00:24:54.713 --rc geninfo_unexecuted_blocks=1 00:24:54.713 00:24:54.713 ' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.713 --rc genhtml_branch_coverage=1 00:24:54.713 --rc genhtml_function_coverage=1 00:24:54.713 --rc genhtml_legend=1 00:24:54.713 --rc geninfo_all_blocks=1 00:24:54.713 --rc geninfo_unexecuted_blocks=1 00:24:54.713 00:24:54.713 ' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:54.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.713 --rc genhtml_branch_coverage=1 00:24:54.713 --rc genhtml_function_coverage=1 00:24:54.713 --rc genhtml_legend=1 00:24:54.713 --rc geninfo_all_blocks=1 00:24:54.713 --rc geninfo_unexecuted_blocks=1 00:24:54.713 00:24:54.713 ' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.713 14:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.992 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:59.993 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:59.993 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:59.993 Found net devices under 0000:31:00.0: cvl_0_0 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:59.993 Found net devices under 0000:31:00.1: cvl_0_1 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
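With both E810 ports discovered, nvmf_tcp_init (whose xtrace follows) splits them across network namespaces: the target port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1. A condensed sketch of that setup, mirroring the commands below (interface names are whatever the ice driver exposes on this host):

    # Target side isolated in its own netns; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Open the NVMe/TCP port on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT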
00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.993 14:08:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:24:59.993 00:24:59.993 --- 10.0.0.2 ping statistics --- 00:24:59.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.993 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:24:59.993 00:24:59.993 --- 10.0.0.1 ping statistics --- 00:24:59.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.993 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:59.993 ************************************ 00:24:59.993 START TEST nvmf_target_disconnect_tc1 00:24:59.993 ************************************ 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.993 14:08:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:59.993 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.993 [2024-11-06 14:08:39.243826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.994 [2024-11-06 14:08:39.243874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadfcf0 with addr=10.0.0.2, port=4420 00:24:59.994 [2024-11-06 14:08:39.243892] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:59.994 [2024-11-06 14:08:39.243899] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:59.994 [2024-11-06 14:08:39.243905] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:24:59.994 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:59.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:59.994 Initializing NVMe Controllers 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:59.994 00:24:59.994 real 0m0.090s 00:24:59.994 user 0m0.046s 00:24:59.994 sys 0m0.043s 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.994 ************************************ 00:24:59.994 END TEST nvmf_target_disconnect_tc1 00:24:59.994 ************************************ 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:24:59.994 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:00.253 ************************************ 00:25:00.253 START TEST nvmf_target_disconnect_tc2 00:25:00.253 ************************************ 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1045546 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1045546 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1045546 ']' 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.253 14:08:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:00.253 [2024-11-06 14:08:39.337203] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:25:00.253 [2024-11-06 14:08:39.337256] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.253 [2024-11-06 14:08:39.421052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.253 [2024-11-06 14:08:39.457522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.253 [2024-11-06 14:08:39.457551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:00.253 [2024-11-06 14:08:39.457559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.253 [2024-11-06 14:08:39.457566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.253 [2024-11-06 14:08:39.457571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.253 [2024-11-06 14:08:39.459336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:00.253 [2024-11-06 14:08:39.459637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:00.253 [2024-11-06 14:08:39.459798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:00.253 [2024-11-06 14:08:39.459798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.192 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.193 Malloc0 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.193 [2024-11-06 14:08:40.171062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.193 14:08:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.193 [2024-11-06 14:08:40.199330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1045776 00:25:01.193 14:08:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:03.110 14:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1045546 00:25:03.110 14:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error 
(sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Write completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 Read completed with error (sct=0, sc=8) 00:25:03.110 starting I/O failed 00:25:03.110 [2024-11-06 14:08:42.226384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.110 [2024-11-06 14:08:42.226774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-11-06 14:08:42.226812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-11-06 14:08:42.227121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-11-06 14:08:42.227131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-11-06 14:08:42.227611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-11-06 14:08:42.227640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 
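The burst of failed I/O and connect() errors above is the point of tc2: target_disconnect.sh launches the reconnect example against the listener and then kills the target out from under it. A condensed sketch of that pattern (the pids shown are the ones from this run; that the harness backgrounds reconnect and records its pid via $! is an assumption about its mechanics):

    # Sketch of the tc2 disconnect sequence visible in the xtrace above.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!        # 1045776 in this run
    sleep 2
    kill -9 1045546        # nvmf_tgt pid; subsequent connect() attempts fail
    sleep 2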
00:25:03.110 [2024-11-06 14:08:42.227935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.110 [2024-11-06 14:08:42.227945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.110 qpair failed and we were unable to recover it.
00:25:03.110 [... the same triplet -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats for every reconnect attempt from 14:08:42.227935 through 14:08:42.289815 ...]
00:25:03.116 [2024-11-06 14:08:42.289808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.116 [2024-11-06 14:08:42.289815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.116 qpair failed and we were unable to recover it.
00:25:03.116 [2024-11-06 14:08:42.290109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.116 [2024-11-06 14:08:42.290116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.116 qpair failed and we were unable to recover it. 00:25:03.116 [2024-11-06 14:08:42.290407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.116 [2024-11-06 14:08:42.290414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.116 qpair failed and we were unable to recover it. 00:25:03.116 [2024-11-06 14:08:42.290606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.290612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.290972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.290979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.291251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.291258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.291644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.291651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.291941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.291948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.292117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.292124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.292419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.292426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.292744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.292751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 
00:25:03.117 [2024-11-06 14:08:42.293037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.293045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.293368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.293375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.293688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.293694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.293980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.293987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.294289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.294296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.294593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.294600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.294901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.294909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.295290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.295297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.295566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.295573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.295744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.295751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 
00:25:03.117 [2024-11-06 14:08:42.296022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.296028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.296330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.296337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.296657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.296665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.296949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.296956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.297254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.297261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.297444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.297450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.297718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.297725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.298032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.298039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.298219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.298227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.298551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.298558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 
00:25:03.117 [2024-11-06 14:08:42.298858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.298865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.299018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.299026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.299408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.299415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.299719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.299727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.300012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.300020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.300302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.300310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.300482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.300489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.300703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.300710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.300887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.300895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.117 qpair failed and we were unable to recover it. 00:25:03.117 [2024-11-06 14:08:42.301203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.117 [2024-11-06 14:08:42.301210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 
00:25:03.118 [2024-11-06 14:08:42.301521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.301528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.301810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.301818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.302129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.302136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.302448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.302456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.302629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.302637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.302954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.302961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.303257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.303264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.303557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.303564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.303720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.303727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.304060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.304067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 
00:25:03.118 [2024-11-06 14:08:42.304355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.304363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.304644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.304651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.304971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.304978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.305231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.305238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.305357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.305365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.305549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.305556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.305844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.305851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.306134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.306142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.306432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.306440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.306733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.306740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 
00:25:03.118 [2024-11-06 14:08:42.306894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.306901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.307114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.307121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.307307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.307316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.307580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.307587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.307883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.307890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.308170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.308177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.308463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.308470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.308802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.308810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.309108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.309115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.309403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.309411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 
00:25:03.118 [2024-11-06 14:08:42.309707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.309715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.310008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.310015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.310299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.310306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.310609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.310616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.310905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.310912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.311204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.311211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.118 [2024-11-06 14:08:42.311516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.118 [2024-11-06 14:08:42.311524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.118 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.311864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.311871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.312165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.312172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.312457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.312464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 
00:25:03.119 [2024-11-06 14:08:42.312640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.312648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.312825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.312833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.313139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.313146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.313431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.313438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.313605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.313612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.313913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.313919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.314255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.314262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.314424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.314431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.314763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.314769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.315097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.315104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 
00:25:03.119 [2024-11-06 14:08:42.315295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.315302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.315606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.315613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.315933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.315939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.316246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.316254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.316454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.316461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.316832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.316840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.316992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.316999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.317298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.317306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.317595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.317601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.317897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.317903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 
00:25:03.119 [2024-11-06 14:08:42.318080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.318086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.318355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.318363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.318652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.318660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.318983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.318989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.319159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.319165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.319446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.319453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.319760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.319767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.320064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.320071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.320255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.320262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.320536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.320543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 
00:25:03.119 [2024-11-06 14:08:42.320873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.320879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.321207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.321214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.321397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.321405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.119 [2024-11-06 14:08:42.321677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.119 [2024-11-06 14:08:42.321684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.119 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.321973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.321980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.322273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.322280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.322563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.322569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.322861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.322868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.323210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.323217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.323409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.323416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 
00:25:03.120 [2024-11-06 14:08:42.323722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.323729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.323928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.323935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.324266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.324273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.324564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.324571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.324886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.324893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.325198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.325205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.325507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.325515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.325795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.325802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.326086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.326093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.326290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.326297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 
00:25:03.120 [2024-11-06 14:08:42.326618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.326625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.326928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.326934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.327223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.327230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.327538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.327545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.327840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.327846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.328041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.328048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.328359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.328366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.328662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.328668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.328936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.328943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 00:25:03.120 [2024-11-06 14:08:42.329152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.120 [2024-11-06 14:08:42.329159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.120 qpair failed and we were unable to recover it. 
00:25:03.404 [2024-11-06 14:08:42.388286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.388293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.388468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.388475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.388812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.388821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.389092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.389098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.389394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.389401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.389714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.389721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.390013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.390019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.390310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.390317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.390625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.390631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.390797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.390804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 
00:25:03.404 [2024-11-06 14:08:42.391100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.391107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.391418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.391426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.391736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.391743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.392075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.392082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-11-06 14:08:42.392237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-11-06 14:08:42.392247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.392319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.392326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.392669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.392676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.392885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.392893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.393175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.393182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.393386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.393393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 
00:25:03.405 [2024-11-06 14:08:42.393688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.393694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.394015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.394021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.394333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.394341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.394690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.394696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.394988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.394995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.395260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.395267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.395601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.395608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.395918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.395925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.396287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.396294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.396588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.396595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 
00:25:03.405 [2024-11-06 14:08:42.396876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.396883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.397189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.397197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.397488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.397496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.397690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.397698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.398041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.398048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.398246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.398253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.398537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.398543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.398840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.398847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.399004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.399012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.399285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.399292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 
00:25:03.405 [2024-11-06 14:08:42.399603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.399610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.399950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.399957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.400270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.400277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.400610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.400617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.400945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.400952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.401256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.401264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.401573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.401580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.401737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.401744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.402049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-11-06 14:08:42.402056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-11-06 14:08:42.402366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.402373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 
00:25:03.406 [2024-11-06 14:08:42.402686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.402693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.402957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.402964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.403156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.403163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.403496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.403503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.403823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.403829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.404124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.404131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.404429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.404436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.404750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.404757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.405048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.405056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.405346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.405353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 
00:25:03.406 [2024-11-06 14:08:42.405552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.405559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.405853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.405859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.406173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.406179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.406472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.406478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.406769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.406775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.407087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.407093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.407424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.407431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.407722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.407729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.407909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.407915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.408227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.408236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 
00:25:03.406 [2024-11-06 14:08:42.408545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.408553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.408832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.408840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.409014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.409273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.409281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.409583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.409590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.409888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.409894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.410214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.410221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.410530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.410537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.410689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.410696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.410965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.410971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 
00:25:03.406 [2024-11-06 14:08:42.411292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.411299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.411593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.411599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.411883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.411889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.412224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-11-06 14:08:42.412231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-11-06 14:08:42.412523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.412530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.412843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.412850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.413023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.413030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.413331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.413338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.413651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.413657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.413965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.413972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-11-06 14:08:42.414267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.414274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.414569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.414576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.414890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.414896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.415233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.415240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.415423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.415431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.415730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.415737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.416045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.416052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.416339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.416346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.416516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.416523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.416817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.416823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-11-06 14:08:42.417086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.417093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.417433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.417440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.417768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.417775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.418061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.418068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.418367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.418374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.418683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.418690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.419007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.419014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.419314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.419320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.419617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.419623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.419973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.419982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-11-06 14:08:42.420270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.420277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.420670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.420677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.420834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.420842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.421136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-11-06 14:08:42.421143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-11-06 14:08:42.421442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.421449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.421652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.421659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.421966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.421973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.422342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.422349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.422521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.422528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.422825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.422831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 
00:25:03.408 [2024-11-06 14:08:42.423116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.423122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.423436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.423443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.423669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.423675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.423938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.423945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.424264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.424271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.424561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.424568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.424862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.424869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.425173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.425180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.425349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.425357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-11-06 14:08:42.425523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.425530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 
00:25:03.408 [2024-11-06 14:08:42.425709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-11-06 14:08:42.425716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats back-to-back from 14:08:42.425709 through 14:08:42.486096, always with connect() failed, errno = 111 for tqpair=0x7fe314000b90, addr=10.0.0.2, port=4420 ...]
00:25:03.414 [2024-11-06 14:08:42.486088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.486096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it.
00:25:03.414 [2024-11-06 14:08:42.486450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.486457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.486741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.486748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.487113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.487120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.487322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.487330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.487620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.487627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.487880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.487887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.488176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.488184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.488477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.488485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.488812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.488819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.488987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.488997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-11-06 14:08:42.489313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.489320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.489630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.489637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-11-06 14:08:42.489936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-11-06 14:08:42.489943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.490277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.490284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.490496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.490504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.490679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.490686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.490905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.490912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.491218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.491225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.491522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.491529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.491819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.491826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-11-06 14:08:42.492128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.492136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.492421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.492429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.492768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.492775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.493108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.493115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.493399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.493406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.493719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.493725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.494040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.494047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.494223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.494230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.494587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.494595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.494900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.494908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-11-06 14:08:42.495235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.495243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.495547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.495554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.495843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.495850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.496213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.496220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.496564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.496571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.496746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.496753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.497082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.497089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.497383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.497390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.497682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.497689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.497999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.498006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-11-06 14:08:42.498328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.498336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.498526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.498534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.498813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.498820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.499132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.499138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.499501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.499509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.499855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.499862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.500165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.500172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.500477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.500485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.500837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-11-06 14:08:42.500845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-11-06 14:08:42.501123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.501132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-11-06 14:08:42.501407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.501415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.501722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.501729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.502007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.502014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.502318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.502326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.502640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.502647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.502990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.502997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.503145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.503153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.503438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.503445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.503780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.503787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.504064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.504072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-11-06 14:08:42.504406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.504722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.504729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.505040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.505047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.505339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.505347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.505639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.505646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.505942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.505948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.506240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.506255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.506595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.506603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.506887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.506893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.507177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.507185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-11-06 14:08:42.507533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.507541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.507685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.507693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.507972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.507980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.508237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.508249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.508421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.508428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.508697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.508705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.508990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.508998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.509288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.509296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.509580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.509587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.509950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.509957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-11-06 14:08:42.510283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.510291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.510621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.510628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.510910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.510917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.511230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.511237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.511555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-11-06 14:08:42.511562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-11-06 14:08:42.511875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.511882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.512170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.512177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.512547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.512554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.512841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.512848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.513200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.513208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-11-06 14:08:42.513516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.513524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.513839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.513846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.514130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.514137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.514426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.514434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.514755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.514762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.515068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.515075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.515407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.515414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.515634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.515640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.516013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.516020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.516293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.516301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-11-06 14:08:42.516596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.516602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.516933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.516940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.517256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.517263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.517645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.517652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.517938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.517945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.518266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.518273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.518559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.518565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.518745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.518752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.518911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.518918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.519219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.519227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-11-06 14:08:42.519405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.519413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.519711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.519718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.520018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.520025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.520315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.520322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.520673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.520680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.520853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.520860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.521076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.521083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.521367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.521374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.521673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.521679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.521974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.521981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-11-06 14:08:42.522264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.522272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.522581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-11-06 14:08:42.522588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-11-06 14:08:42.522874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.522880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.523167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.523173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.523515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.523522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.523856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.523862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.524152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.524159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.524466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.524473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.524757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.524763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-11-06 14:08:42.525072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-11-06 14:08:42.525080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 
00:25:03.418 [2024-11-06 14:08:42.525373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.418 [2024-11-06 14:08:42.525380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.418 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, then sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps from 14:08:42.525 through 14:08:42.585; duplicate records elided ...]
00:25:03.424 [2024-11-06 14:08:42.585507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.424 [2024-11-06 14:08:42.585514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.424 qpair failed and we were unable to recover it.
00:25:03.424 [2024-11-06 14:08:42.585836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.585842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.586133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.586140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.586447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.586454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.586755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.586761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.587020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.587027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.587323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.587330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.587618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.587624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.587918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.587924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.588214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.588221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.588433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.588440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 
00:25:03.424 [2024-11-06 14:08:42.588760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.588767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.589066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.589073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.589356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.589363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.589530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.589537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.589854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.589860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.590147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.590153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.590440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.590449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.590750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.590757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.591053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.591060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.591257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.591265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 
00:25:03.424 [2024-11-06 14:08:42.591592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-11-06 14:08:42.591599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-11-06 14:08:42.591910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.591917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.592108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.592115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.592250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.592257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.592560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.592567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.592739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.592747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.593077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.593084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.593387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.593394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.593765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.593771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.594062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.594069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-11-06 14:08:42.594366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.594373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.594715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.594722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.594924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.594931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.595227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.595234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.595530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.595537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.595728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.595735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.596047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.596054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.596357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.596364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.596681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.596688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.597005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.597011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-11-06 14:08:42.597300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.597307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.597480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.597487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.597599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.597605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.597931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.597938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.598303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.598311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.598606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.598613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.598914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.598921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.599199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.599206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.599387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.599395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.599732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.599738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-11-06 14:08:42.599937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.599944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.600254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.600261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.600586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.600592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.600936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.600942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.601117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.601124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.601308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.601315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.601626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.601634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.601818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.601825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.602115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-11-06 14:08:42.602122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-11-06 14:08:42.602414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.602421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-11-06 14:08:42.602721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.602728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.602923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.602930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.603119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.603127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.603426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.603433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.603720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.603727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.604029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.604036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.604326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.604334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.604609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.604616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.604807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.604814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.605107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.605114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-11-06 14:08:42.605400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.605407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.605627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.605937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.605944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.606233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.606240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.606528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.606535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.606842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.606849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.607137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.607144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.607430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.607437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.607744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.607750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.608037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.608043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-11-06 14:08:42.608194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.608201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.608509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.608517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.608817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.608824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.609112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.609120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.609412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.609419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.609741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.609747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.610033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.610039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.610339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.610346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.610665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.610672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.611040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.611047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-11-06 14:08:42.611349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.611357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.611661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.611669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.611955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.611963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.612266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.612273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.612435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.612442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-11-06 14:08:42.612733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-11-06 14:08:42.612740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.613130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.613139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.613417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.613424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.613693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.613700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.614051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.614058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-11-06 14:08:42.614210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.614217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.614448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.614455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.614773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.614779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.615072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.615079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.615266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.615273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.615631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.615638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.615945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.615952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.616264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.616271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.616423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.616430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.616739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.616746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-11-06 14:08:42.617059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.617066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.617352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.617359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.617714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.617721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.618066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.618073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.618161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.618167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.618430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.618437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.618779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.618786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.619100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.619107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.619413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.619420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.619720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.619726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-11-06 14:08:42.620045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.620052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.620352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.620359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.620535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.620541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.620690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.620697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.620883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.620890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.621166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.621172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.621448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.621455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.621633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.621640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.621941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.621948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-11-06 14:08:42.622233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-11-06 14:08:42.622240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-11-06 14:08:42.622506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.427 [2024-11-06 14:08:42.622513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.428 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeated ~209 more times between 14:08:42.622813 and 14:08:42.683923, every attempt for tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 ...]
00:25:03.719 [2024-11-06 14:08:42.683916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.719 [2024-11-06 14:08:42.683923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.719 qpair failed and we were unable to recover it.
00:25:03.719 [2024-11-06 14:08:42.684232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.684239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.684298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.684305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.684599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.684606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.684898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.684905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.685191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.685199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.685356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.685364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.685640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.685647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.685936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.685943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.686248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.686255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.686420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.686426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 
00:25:03.719 [2024-11-06 14:08:42.686712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.686719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.687015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.687022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.719 [2024-11-06 14:08:42.687336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.719 [2024-11-06 14:08:42.687343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.719 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.687667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.687674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.687964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.687971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.688272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.688279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.688458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.688465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.688650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.688657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.688991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.688997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.689285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.689292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 
00:25:03.720 [2024-11-06 14:08:42.689559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.689566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.689651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.689657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.689938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.689945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.690247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.690255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.690538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.690544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.690828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.690835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.691134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.691141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.691428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.691436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.691718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.691725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.692032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.692039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 
00:25:03.720 [2024-11-06 14:08:42.692327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.692334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.692674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.692681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.693028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.693035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.693336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.693343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.693535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.720 [2024-11-06 14:08:42.693542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.720 qpair failed and we were unable to recover it. 00:25:03.720 [2024-11-06 14:08:42.693836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.693842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.694136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.694144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.694445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.694452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.694825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.694831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.695169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.695175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 
00:25:03.721 [2024-11-06 14:08:42.695505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.695512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.695652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.695659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.695965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.695972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.696136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.696143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.696420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.696427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.696708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.696714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.696911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.696918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.697241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.697251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.697604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.697611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.697928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.697935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 
00:25:03.721 [2024-11-06 14:08:42.698124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.698131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.698329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.698336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.698531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.698538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.698886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.698892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.699039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.699047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.699326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.699333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.699670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.699677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.699981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.699989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.721 qpair failed and we were unable to recover it. 00:25:03.721 [2024-11-06 14:08:42.700281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.721 [2024-11-06 14:08:42.700288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.700576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.700582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 
00:25:03.722 [2024-11-06 14:08:42.700739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.700746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.701087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.701094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.701373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.701380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.701654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.701663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.701964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.701971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.702287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.702295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.702606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.702613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.702796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.702803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.703068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.703076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.703359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.703367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 
00:25:03.722 [2024-11-06 14:08:42.703644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.703651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.703948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.703955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.704249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.704257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.704661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.704668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.704983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.704989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.705270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.705277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.705589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.705596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.705944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.705951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.706241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.706255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.722 [2024-11-06 14:08:42.706557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.706564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 
00:25:03.722 [2024-11-06 14:08:42.706805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.722 [2024-11-06 14:08:42.706811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.722 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.707136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.707142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.707321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.707328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.707653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.707659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.707948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.707954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.708316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.708324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.708609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.708615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.708944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.708951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.709237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.709246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.709460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.709468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 
00:25:03.723 [2024-11-06 14:08:42.709747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.709754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.710054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.710061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.710359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.710366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.710654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.710660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.710970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.710976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.711315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.711322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.711490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.711497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.711820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.711827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.712148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.712155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.712464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.712471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 
00:25:03.723 [2024-11-06 14:08:42.712768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.712774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.712921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.712928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.713150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.713157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.713366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.713375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.723 qpair failed and we were unable to recover it. 00:25:03.723 [2024-11-06 14:08:42.713552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.723 [2024-11-06 14:08:42.713560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.713827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.713834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.713999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.714007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.714179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.714186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.714516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.714523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.714838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.714845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 
00:25:03.724 [2024-11-06 14:08:42.715174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.715181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.715473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.715480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.715803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.715811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.716092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.716099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.716375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.716382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.716685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.716692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.716977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.716984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.717283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.717290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.717505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.717512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.717819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.717826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 
00:25:03.724 [2024-11-06 14:08:42.718110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.718117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.718426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.718433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-11-06 14:08:42.718744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-11-06 14:08:42.718750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.719046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.719053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.719373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.719381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.719693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.719700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.719989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.719996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.720149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.720157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.720460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.720467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.720756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.720763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 
00:25:03.725 [2024-11-06 14:08:42.721052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.721059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.721354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.721361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.721671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.721678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.721925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.721932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.722207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.722214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.722429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.722437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.722749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.722756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.723078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.723085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.723374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.723381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.723670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.723677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 
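On Linux, errno 111 is ECONNREFUSED: each TCP connection attempt to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) is being actively rejected, which typically means nothing is listening on that address and port, i.e. the target side of the test has gone down or has not come up yet, so the SPDK host keeps retrying. A minimal standalone probe (illustrative only, not part of this test suite) reproduces the same failure mode:

/*
 * probe_connect.c -- illustrative sketch, not SPDK code.
 * Shows how connect() to an address/port with no listener fails with
 * errno 111 (ECONNREFUSED), matching the loop in the log above.
 * Build: cc -o probe_connect probe_connect.c
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connected: a listener is accepting on 10.0.0.2:4420\n");
    close(fd);
    return 0;
}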
00:25:03.725 [2024-11-06 14:08:42.723963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.723970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.724281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.724288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.724574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.724581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.724880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.724888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-11-06 14:08:42.725013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-11-06 14:08:42.725021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 Write completed with error (sct=0, sc=8) 00:25:03.725 starting I/O failed 00:25:03.725 Read completed with error (sct=0, sc=8) 00:25:03.725 starting I/O failed 00:25:03.725 Read completed with error (sct=0, sc=8) 00:25:03.725 starting I/O failed 00:25:03.725 Write completed with error (sct=0, sc=8) 00:25:03.725 starting I/O failed 00:25:03.725 Read completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Read completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Read completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Read completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Read completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Read completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 00:25:03.726 Write completed with error (sct=0, sc=8) 00:25:03.726 starting I/O failed 
00:25:03.726 Write completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Write completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Read completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Write completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Read completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Write completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Read completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Read completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Write completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Write completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 Read completed with error (sct=0, sc=8)
00:25:03.726 starting I/O failed
00:25:03.726 [2024-11-06 14:08:42.725742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:03.726 [2024-11-06 14:08:42.726260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.726316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe320000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.726622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.726654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe320000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.727021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.727051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe320000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.727472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.727500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.727805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.727814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.728109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.728118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.728526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.728556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.728905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.728915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.729089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.729097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.729444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.729452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.729761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.729770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.730059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.730067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.730381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.730390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.730556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.730563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.730913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.730921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.731294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.731303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-11-06 14:08:42.731639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.726 [2024-11-06 14:08:42.731647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.731937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.731945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.732252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.732260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.732548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.732556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.732871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.732879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.733159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.733167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.733470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.733479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.733663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.733670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.733982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.733990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.734163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.734171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.734374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.734382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.734679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.734685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.734981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.734988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.735295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.735302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.735614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.735621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.735937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.735947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.736130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.736138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.736450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.736457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.736721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.736727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.737009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.737017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.737303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.737311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.737520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.737527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.737793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.737800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.738196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.738204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.738509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.738517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.738846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.738853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-11-06 14:08:42.739138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.727 [2024-11-06 14:08:42.739145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.739451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.739459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.739758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.739766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.740068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.740075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.740360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.740367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.740741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.740748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.740935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.740942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.741308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.741315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.741595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.741601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.741776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.741784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.742148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.742155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.742462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.742469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.742766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.742773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.743074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.743081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.743383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.743390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.743682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.743689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.743974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.743982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.744286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.744294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.744648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.744655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.744946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.744953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.745247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.745254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.745563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.745570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.745842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.745850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.746156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.746163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.746464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.746471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.746773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.746780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-11-06 14:08:42.747071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.728 [2024-11-06 14:08:42.747079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.747370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.747377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.747671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.747678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.748014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.748022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.748309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.748317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.748669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.748676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.748978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.748984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.749274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.749281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.749578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.749585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.749943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.749950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.750238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.750249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.750543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.750550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.750846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.750853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.751162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.751169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.751454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.751461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.751769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.751776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.752068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.752076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.752346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.752353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.752680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.752688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.752897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.752904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.753196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.753203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.753507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.753514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.753863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.753870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.754161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.754168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.754492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.754499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.754746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.754754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.729 [2024-11-06 14:08:42.755066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.729 [2024-11-06 14:08:42.755073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.729 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.755365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.755372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.755723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.755730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.756019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.756026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.756334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.756342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.756639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.756646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.756934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.756941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.757242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.757261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.757570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.757577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.757867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.757874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.758164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.758170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.758333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.758340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.758614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.758620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.758907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.758914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.759260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.759267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.759474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.759481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.759753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.759759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.760070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.760078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.760394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.760401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.760687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.760694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.760987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.760994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.761203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.761210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.761553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.761560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.761735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.761741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.762022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.762029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.762379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.762386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.762674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.762680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.762862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.762869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.763136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.730 [2024-11-06 14:08:42.763143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.730 qpair failed and we were unable to recover it.
00:25:03.730 [2024-11-06 14:08:42.763514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.763521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.763837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.763844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.764111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.764118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.764427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.764434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.764755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.764762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.764986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.764993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.765304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.765310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.765485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.765492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.765800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.765807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.766112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.766119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.766410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.766417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.766767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.766774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.767058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.767065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.767352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.767359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.767514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.767521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.731 qpair failed and we were unable to recover it.
00:25:03.731 [2024-11-06 14:08:42.767934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.731 [2024-11-06 14:08:42.767941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.768224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.768231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.768433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.768440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.768721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.768728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.769028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.769034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.769313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.769320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.769513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.769520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.769862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.769869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.770067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.770074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.770356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.770363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.770679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.770685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.770972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.770979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.771139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.771146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.771461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.771470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.771775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.771782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.772063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.772070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.772517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.772524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.772810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.772817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.773104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.773111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.773413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.773420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.773736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.773743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.774042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.774049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.774361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.774368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.774664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.774672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.774946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.774953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.732 qpair failed and we were unable to recover it.
00:25:03.732 [2024-11-06 14:08:42.775251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.732 [2024-11-06 14:08:42.775258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.733 qpair failed and we were unable to recover it.
00:25:03.733 [2024-11-06 14:08:42.775543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.733 [2024-11-06 14:08:42.775550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.733 qpair failed and we were unable to recover it.
00:25:03.733 [2024-11-06 14:08:42.775939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.733 [2024-11-06 14:08:42.775946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.733 qpair failed and we were unable to recover it.
00:25:03.733 [2024-11-06 14:08:42.776136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.776144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.776482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.776489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.776808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.776815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.777102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.777108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.777412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.777419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.777720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.777726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.778006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.778013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.778172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.778180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.778492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.778499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.778785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.778792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 
00:25:03.733 [2024-11-06 14:08:42.779109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.779116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.779415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.779422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.779736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.779743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.780036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.780043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.780397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.780404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.780746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.780752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.781053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.781059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.781364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.781371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.781682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.781689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-11-06 14:08:42.781977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-11-06 14:08:42.781984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-11-06 14:08:42.782274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.782281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.782577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.782584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.782899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.782906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.783083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.783089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.783352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.783359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.783713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.783722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.784014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.784021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.784315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.784322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.784642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.784649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.784935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.784942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-11-06 14:08:42.785272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.785279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.785592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.785599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.785937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.785944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.786230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.786236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.786537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.786544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.786862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.786869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.787155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.787161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.787464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.787471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.787785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.787792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.788085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.788092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-11-06 14:08:42.788398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.788405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.788689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.788696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.788994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.789000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.789261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-11-06 14:08:42.789268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-11-06 14:08:42.789574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.789580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.789877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.789884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.790262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.790269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.790553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.790559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.790850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.790857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.791161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.791168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-11-06 14:08:42.791339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.791346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.791549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.791556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.791832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.791839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.791911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.791918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.792215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.792222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.792551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.792558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.792849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.792856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.793042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.793049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.793332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.793339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.793668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.793674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-11-06 14:08:42.793989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.793996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.794303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.794311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.794616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.794623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.794915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.794921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.795090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.795097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.795377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.795386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.795675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.795682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.795966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.795973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.796295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.796302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-11-06 14:08:42.796588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-11-06 14:08:42.796595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-11-06 14:08:42.796882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.796890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.797086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.797093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.797458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.797465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.797751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.797758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.798058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.798065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.798422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.798429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.798627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.798634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.798805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.798812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.799100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.799107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.799268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.799276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 
00:25:03.736 [2024-11-06 14:08:42.799572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.799579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.799875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.799882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.800184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.800191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.800538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.800545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.800834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.800840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.801150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.801157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.801314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.801322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.801677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.801684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.801972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.801979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.802280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.802287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 
00:25:03.736 [2024-11-06 14:08:42.802571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.802577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.802922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.802929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.803230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.803238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.803538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-11-06 14:08:42.803545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-11-06 14:08:42.803831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.803839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.804152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.804159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.804441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.804448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.804740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.804747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.805081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.805088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.805377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.805384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 
00:25:03.737 [2024-11-06 14:08:42.805686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.805693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.805980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.805986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.806270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.806277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.806577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.806584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.806967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.806974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.807292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.807300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.807612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.807618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.807905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.807911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.808196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.808203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.808517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.808525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 
00:25:03.737 [2024-11-06 14:08:42.808810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.808816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.809150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.809156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.809468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.809475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.809766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.809772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.810084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.810091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.810392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.810400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.810688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.810695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.810980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.810987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-11-06 14:08:42.811292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-11-06 14:08:42.811299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.811627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.811633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 
00:25:03.738 [2024-11-06 14:08:42.811921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.811928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.812210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.812216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.812504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.812512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.812795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.812802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.813098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.813105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.813257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.813265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.813521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.813528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.813842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.813849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.814011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.814018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.814342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.814350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 
00:25:03.738 [2024-11-06 14:08:42.814670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.814676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.814843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.814850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.815185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.815193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.815566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.815573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.815751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.815758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.815837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.815843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.816118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.816125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.816408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.816415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.816830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-11-06 14:08:42.816837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-11-06 14:08:42.817211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.817218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 
00:25:03.739 [2024-11-06 14:08:42.817407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.817415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.817718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.817725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.818012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.818019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.818310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.818317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.818640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.818647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.818733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.818739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.819006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.819013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.819253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.819260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.819559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.819566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-11-06 14:08:42.819885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-11-06 14:08:42.819892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 
00:25:03.739 [2024-11-06 14:08:42.820219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:03.739 [2024-11-06 14:08:42.820225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 
00:25:03.739 qpair failed and we were unable to recover it. 
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 14:08:42.820 through 14:08:42.882 ...]
00:25:03.748 [2024-11-06 14:08:42.882387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.882394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.882697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.882704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.882771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.882778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.883087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.883094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.883383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.883390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.883732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.883739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.884035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.884043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.884327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.884334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.884635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.884643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.884894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.884901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 
00:25:03.748 [2024-11-06 14:08:42.885205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.885212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.885405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.885412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.885684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.885692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.886001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.886008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.886321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.886328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.886528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.886535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.886847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.886854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.887146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.887152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.887445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.887452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.887760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.887766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 
00:25:03.748 [2024-11-06 14:08:42.888134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.888140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-11-06 14:08:42.888341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-11-06 14:08:42.888349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.888670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.888677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.888941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.888947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.889290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.889297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.889602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.889609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.889903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.889909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.890207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.890214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.890508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.890515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.890811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.890818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-11-06 14:08:42.891112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.891119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.891414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.891421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.891740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.891746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.892029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.892036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.892375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.892382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.892647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.892654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.892961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.892967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.893135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.893143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.893413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.893420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.893743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.893750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-11-06 14:08:42.894029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.894035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.894321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.894329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.894627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.894634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.894946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.894953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.895242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.895252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.895548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.895554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.895745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.895753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.896099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.896106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.896450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.896457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-11-06 14:08:42.896746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-11-06 14:08:42.896753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-11-06 14:08:42.897042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.897049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.897394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.897401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.897699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.897708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.898001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.898008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.898298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.898305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.898605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.898612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.898913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.898920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.899227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.899234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.899545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.899553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.899833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.899839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 
00:25:03.750 [2024-11-06 14:08:42.900146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.900153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.900424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.900431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.900717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.900724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.901012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.901018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.901309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.901316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.901612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.901619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.901923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.901930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.902231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.902238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.902540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.902548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.902835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.902842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 
00:25:03.750 [2024-11-06 14:08:42.903209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.903216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.903497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.903504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.903846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.903852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.904139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.904145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.904436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.904443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.904755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.904762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.905047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-11-06 14:08:42.905054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-11-06 14:08:42.905347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.905354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.905702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.905709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.905897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.905904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 
00:25:03.751 [2024-11-06 14:08:42.906188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.906195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.906506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.906513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.906807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.906813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.907102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.907109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.907394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.907401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.907698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.907704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.907989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.907996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.908288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.908295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.908577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.908584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.908869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.908876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 
00:25:03.751 [2024-11-06 14:08:42.909162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.909169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.909476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.909483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.909809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.909817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.910104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.910111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.910410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.910417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.910728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.910735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.911046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.911053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.911359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.911367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.911560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.911567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.911869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.911876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 
00:25:03.751 [2024-11-06 14:08:42.912171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.912178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.912512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.912520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.912804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.912811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.912978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.912985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.913288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.913295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.751 qpair failed and we were unable to recover it. 00:25:03.751 [2024-11-06 14:08:42.913574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.751 [2024-11-06 14:08:42.913581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.913862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.913869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.914165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.914172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.914491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.914498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.914788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.914795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 
00:25:03.752 [2024-11-06 14:08:42.915084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.915091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.915306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.915313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.915615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.915621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.915793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.915800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.916142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.916149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.916436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.916443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.916727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.916734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.917089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.917096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.917411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.917418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.917708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.917715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 
00:25:03.752 [2024-11-06 14:08:42.918031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.918038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.918329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.918336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.918633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.918640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.919007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.919014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.919207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.919213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.919480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.919487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.919799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.919806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.920022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.920028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.920215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.920222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 00:25:03.752 [2024-11-06 14:08:42.920507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.752 [2024-11-06 14:08:42.920515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:03.752 qpair failed and we were unable to recover it. 
00:25:03.752 [2024-11-06 14:08:42.920824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.752 [2024-11-06 14:08:42.920831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.752 qpair failed and we were unable to recover it.
[... this connect()/qpair-failure triplet repeats for tqpair=0x7fe314000b90 with successive timestamps from 14:08:42.921114 through 14:08:42.964066 ...]
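errno 111 here is Linux ECONNREFUSED: posix_sock_create() is issuing a plain connect(2) to 10.0.0.2:4420 (the IANA NVMe/TCP port) and the peer answers with a TCP RST, i.e. nothing is listening there while the target is down or being torn down. A minimal standalone sketch of the same failure mode, outside SPDK; the address and port are taken from this log, everything else is illustrative:

/* Not SPDK code: a standalone reproduction of the failure mode above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}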
00:25:03.755 [2024-11-06 14:08:42.964376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.755 [2024-11-06 14:08:42.964383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.755 qpair failed and we were unable to recover it.
00:25:03.755 [2024-11-06 14:08:42.964520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.755 [2024-11-06 14:08:42.964529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420
00:25:03.755 qpair failed and we were unable to recover it.
00:25:03.755 [2024-11-06 14:08:42.964701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2268020 is same with the state(6) to be set
00:25:03.755 Read completed with error (sct=0, sc=8)
00:25:03.755 starting I/O failed
[... a burst of Read/Write completions, each "completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:25:03.755 [2024-11-06 14:08:42.965240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.755 [2024-11-06 14:08:42.965631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.755 [2024-11-06 14:08:42.965675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:03.755 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x226b490 with successive timestamps from 14:08:42.965990 through 14:08:42.968658 ...]
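Two distinct signatures appear in that burst. The (sct=0, sc=8) pairs are the NVMe Status Code Type and Status Code of each aborted completion: SCT 0 is Generic Command Status, and within it SC 0x08 is "Command Aborted due to SQ Deletion", the status a host driver typically assigns to I/O still outstanding when its queue pair fails. The -6 in the CQ transport error is -ENXIO, matching the "No such device or address" text. A small illustrative decoder for the (sct, sc) pairs; the mapping follows the NVMe spec, not any SPDK API:

/* Illustrative only: map the (sct, sc) pairs printed above to names. */
#include <stdint.h>
#include <stdio.h>

static const char *nvme_status_name(uint8_t sct, uint8_t sc)
{
    if (sct != 0x0) {
        return "non-generic status code type";
    }
    switch (sc) {
    case 0x00: return "SUCCESSFUL COMPLETION";
    case 0x04: return "DATA TRANSFER ERROR";
    case 0x08: return "COMMAND ABORTED DUE TO SQ DELETION";
    default:   return "other generic status";
    }
}

int main(void)
{
    /* The pair every failed completion in this log carries. */
    uint8_t sct = 0, sc = 8;

    printf("sct=%u, sc=%u -> %s\n", sct, sc, nvme_status_name(sct, sc));
    return 0;
}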
00:25:03.756 [2024-11-06 14:08:42.968962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.968973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.969306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.969317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.969605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.969615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.969899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.969908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.970276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.970287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.970461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.970471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.970834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.970845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.971170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.971181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.971326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.971337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-11-06 14:08:42.972329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-11-06 14:08:42.972352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 
00:25:03.756 [2024-11-06 14:08:42.972695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.756 [2024-11-06 14:08:42.972706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:03.756 qpair failed and we were unable to recover it.
00:25:03.756 [the preceding three-line record repeats twice more for tqpair=0x226b490, timestamps 14:08:42.973017 through 14:08:42.973167]
00:25:03.756 Read completed with error (sct=0, sc=8)
00:25:03.756 starting I/O failed
00:25:03.756 [the preceding completion/failure pair repeats for every outstanding I/O: 12 reads, 3 writes, 11 reads, then write, read, write, read, write, read; 32 failed completions in total (26 reads, 6 writes)]
00:25:03.756 [2024-11-06 14:08:42.973816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.756 [2024-11-06 14:08:42.974222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.756 [2024-11-06 14:08:42.974259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:03.756 qpair failed and we were unable to recover it.
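errno = 111 here is ECONNREFUSED on Linux: the host reaches 10.0.0.2 but nothing is accepting TCP connections on port 4420 (the target's listener is gone or not yet up), so every qpair reconnect fails at the plain-socket layer before any NVMe/TCP handshake starts. A minimal, self-contained sketch of how a refused connect() surfaces exactly this errno, using ordinary POSIX sockets rather than SPDK code (the address and port below simply mirror the log; on a network where the peer silently drops SYNs you would see a timeout instead):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Reproduce the failure mode above: a TCP connect() to a host/port with
 * no listener fails with ECONNREFUSED, which is errno 111 on Linux. */
int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With an actively refusing peer this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}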
00:25:04.036 [the three-line connect() failed / sock connection error / qpair failed record repeats continuously for tqpair=0x7fe318000b90 (addr=10.0.0.2, port=4420, errno = 111 on every attempt), wall-clock timestamps 14:08:42.974684 through 14:08:43.028013, with only the timestamps changing]
00:25:04.036 [2024-11-06 14:08:43.028356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.028363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.028653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.028660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.028848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.028855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.029136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.029143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.029465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.029472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.029767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.029773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.029977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.029984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.030287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.030295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.030582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.030589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.030734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.030741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 
00:25:04.036 [2024-11-06 14:08:43.030972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.030979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.031149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.031156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.031368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.031379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.031572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.031578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.031870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.031877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.032174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.032181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.032381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.032388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.032682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.032690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.032970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.032978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.033269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.033276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 
00:25:04.036 [2024-11-06 14:08:43.033521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.033528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.036 qpair failed and we were unable to recover it. 00:25:04.036 [2024-11-06 14:08:43.033819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.036 [2024-11-06 14:08:43.033826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.034002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.034009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.034321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.034328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.034639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.034646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.034960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.034966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.035272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.035279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.035551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.035558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.035744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.035751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.036027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.036034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 
00:25:04.037 [2024-11-06 14:08:43.036206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.036213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.036478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.036486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.036761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.036769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.037036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.037044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.037345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.037352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.037667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.037675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.037960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.037967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.038261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.038268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.038623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.038630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.038810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.038817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 
00:25:04.037 [2024-11-06 14:08:43.039093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.039100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.039412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.039419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.039588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.039595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.039857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.039863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.040155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.040162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.040454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.040461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.040856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.040863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.041156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.041164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.041472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.041479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.041816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.041823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 
00:25:04.037 [2024-11-06 14:08:43.042035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.042041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.042339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.042346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.042678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.042687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.042951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.042957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.043270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.043277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.043469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.043476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.043777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.043784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.037 [2024-11-06 14:08:43.044060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.037 [2024-11-06 14:08:43.044067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.037 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.044370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.044378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.044674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.044681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 
00:25:04.038 [2024-11-06 14:08:43.044973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.044980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.045369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.045376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.045681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.045688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.045980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.045987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.046301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.046308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.046623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.046630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.046927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.046934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.047213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.047220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.047536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.047543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.047852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.047859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 
00:25:04.038 [2024-11-06 14:08:43.048161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.048169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.048328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.048336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.048681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.048689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.049001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.049008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.049309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.049316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.049612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.049618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.049899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.049906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.050085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.050092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.050382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.050389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.050694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.050701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 
00:25:04.038 [2024-11-06 14:08:43.051016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.051023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.051756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.051773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.052073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.052081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.052365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.052690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.052697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.053015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.053022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.053299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.053306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.053574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.053581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.053877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.053884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.054195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.054202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 
00:25:04.038 [2024-11-06 14:08:43.054506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.054514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.054805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.054812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.055133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.055142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.055453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.055460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.038 [2024-11-06 14:08:43.055731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.038 [2024-11-06 14:08:43.055737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.038 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.055927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.055934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.056240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.056252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.056588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.056595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.056887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.056894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.057191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.057198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 
00:25:04.039 [2024-11-06 14:08:43.057518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.057526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.057835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.057842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.058163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.058170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.058352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.058360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.058672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.058679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.058848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.058855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.059151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.059158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.059507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.059514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.059858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.059866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.060208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.060215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 
00:25:04.039 [2024-11-06 14:08:43.060516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.060522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.060817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.060824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.061101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.061108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.061276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.061284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.061446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.061454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.061813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.061820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.062123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.062130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.062421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.062429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.062785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.062792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.063063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.063070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 
00:25:04.039 [2024-11-06 14:08:43.063377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.063385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.063678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.063685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.064044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.064051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.064319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.064327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.064592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.064599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.064902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.064909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.065259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.065267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.065597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.065604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.065891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.065897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.066223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.066231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 
00:25:04.039 [2024-11-06 14:08:43.066612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.066620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.039 [2024-11-06 14:08:43.066800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.039 [2024-11-06 14:08:43.066808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.039 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.067109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.067118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.067385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.067393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.067758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.067765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.067917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.067924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.068478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.068542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.068757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.068782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.069135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.069155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 00:25:04.040 [2024-11-06 14:08:43.069626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.040 [2024-11-06 14:08:43.069689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:04.040 qpair failed and we were unable to recover it. 
00:25:04.045 [2024-11-06 14:08:43.124614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.124622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.124960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.124968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.125262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.125269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.125623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.125631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.125929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.125936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.126223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.126230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.126559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.126566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.126839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.126847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.127102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.127110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.127406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.127414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 
00:25:04.045 [2024-11-06 14:08:43.127687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.127694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.045 [2024-11-06 14:08:43.127883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.045 [2024-11-06 14:08:43.127891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.045 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.128209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.128217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.128523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.128531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.128867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.128873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.129148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.129155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.129457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.129464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.129790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.129797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.130075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.130081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.130378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.130385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 
00:25:04.046 [2024-11-06 14:08:43.130689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.130696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.130949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.130956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.131248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.131256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.131567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.131574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.131891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.131897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.132083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.132089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.132408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.132415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.132744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.132751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.133075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.133083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.133383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.133392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 
00:25:04.046 [2024-11-06 14:08:43.133562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.133569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.133914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.133921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.134133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.134139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.134415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.134421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.134637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.134644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.134907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.134915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.135221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.135228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.135404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.135411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.135697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.135704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.136013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.136021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 
00:25:04.046 [2024-11-06 14:08:43.136314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.136321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.136639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.136646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.136944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.136950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.137263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.137270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.137582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.137589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.137835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.137842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.138179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.138186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.138534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.138542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.046 qpair failed and we were unable to recover it. 00:25:04.046 [2024-11-06 14:08:43.138864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.046 [2024-11-06 14:08:43.138871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.139198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.139205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 
00:25:04.047 [2024-11-06 14:08:43.139496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.139503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.139885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.139892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.140209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.140216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.140408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.140416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.140826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.140833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.141126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.141134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.141430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.141438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.141730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.141736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.142036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.142043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.142251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.142258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 
00:25:04.047 [2024-11-06 14:08:43.142575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.142582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.142885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.142892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.143173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.143180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.143487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.143495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.143798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.143806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.143997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.144006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.144278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.144286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.144579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.144585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.144871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.144879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.145170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.145179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 
00:25:04.047 [2024-11-06 14:08:43.145548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.145555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.145839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.145847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.146114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.146121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.146412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.146419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.146719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.146726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.147056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.147062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.147357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.147364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.147404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.147411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.147701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.147707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.147993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.148000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 
00:25:04.047 [2024-11-06 14:08:43.148758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.148774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.149051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.149059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.149360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.047 [2024-11-06 14:08:43.149367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.047 qpair failed and we were unable to recover it. 00:25:04.047 [2024-11-06 14:08:43.149666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.149673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.149976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.149984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.150294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.150302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.150479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.150487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.150774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.150782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.151178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.151185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.151669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.151681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 
00:25:04.048 [2024-11-06 14:08:43.152005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.152014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.152344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.152352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.152653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.152661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.152954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.152961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.153129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.153137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.153414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.153422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.153734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.153742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.154033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.154040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.154430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.154437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.154752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.154759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 
00:25:04.048 [2024-11-06 14:08:43.155053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.155060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.155402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.155410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.155752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.155758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.156075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.156082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.156382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.156389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.156689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.156696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.156993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.157001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.157310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.157318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.157671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.157678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.157848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.157856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 
00:25:04.048 [2024-11-06 14:08:43.158143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.158149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.158453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.158460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.158790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.158797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.159139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.159147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.159482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.159489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.159816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.159823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.160113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.160120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.160415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.160422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.048 [2024-11-06 14:08:43.160787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.048 [2024-11-06 14:08:43.160795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.048 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.161033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.161040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 
00:25:04.049 [2024-11-06 14:08:43.161355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.161362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.161651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.161658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.161953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.161960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.162329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.162337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.162619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.162627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.162831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.162838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.163025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.163032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.163345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.163352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.163686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.163693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.163879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.163887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 
00:25:04.049 [2024-11-06 14:08:43.164159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.164166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.164561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.164568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.164861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.164868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.165254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.165261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.165627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.165634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.165835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.165842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.166148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.166155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.166537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.166544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.166716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.166724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.167063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.167070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 
00:25:04.049 [2024-11-06 14:08:43.167387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.167395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.167698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.167704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.167911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.167919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.168224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.168231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.168553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.168560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.168892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.168900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.169186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.169194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.169496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.169503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.169803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.169810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 00:25:04.049 [2024-11-06 14:08:43.170129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.049 [2024-11-06 14:08:43.170137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.049 qpair failed and we were unable to recover it. 
00:25:04.055 [2024-11-06 14:08:43.229241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.229253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.229535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.229543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.229843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.229851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.230150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.230158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.230330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.230337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.230693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.230701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.230996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.231004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.231294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.231302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.231652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.231660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.231826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.231834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 
00:25:04.055 [2024-11-06 14:08:43.232139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.232146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.232551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.232559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.232857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.232864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.233212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.233220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.055 [2024-11-06 14:08:43.233511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.055 [2024-11-06 14:08:43.233519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.055 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.233811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.233819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.234114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.234122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.234425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.234433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.234768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.234776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.235065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.235073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 
00:25:04.056 [2024-11-06 14:08:43.235237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.235248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.235591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.235598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.235907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.235914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.236217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.236225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.236540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.236548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.236845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.236853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.236902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.236910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.237062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.237069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.237198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.237206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.237565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.237574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 
00:25:04.056 [2024-11-06 14:08:43.237869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.237877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.238157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.238164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.238351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.238359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.238617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.238625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.238915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.238922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.239104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.239111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.239410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.239417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.239701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.239708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.239869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.239877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.240182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.240191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 
00:25:04.056 [2024-11-06 14:08:43.240488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.240497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.240785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.240793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.240983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.240990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.241303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.241310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.241626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.241633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.241798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.241806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.242136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.056 [2024-11-06 14:08:43.242144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.056 qpair failed and we were unable to recover it. 00:25:04.056 [2024-11-06 14:08:43.242507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.242514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.242821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.242829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.243134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.243141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-11-06 14:08:43.243329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.243337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.243644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.243652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.243948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.243955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.244254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.244262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.244418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.244426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.244616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.244624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.244909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.244917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.245206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.245214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.245527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.245535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.245727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.245735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-11-06 14:08:43.246046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.246054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.246247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.246255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.246552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.246560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.246847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.246855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.247153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.247160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.247323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.247331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.247640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.247649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.247949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.247957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.248247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.248254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.248562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.248568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-11-06 14:08:43.248883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.248890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.249167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.249174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.249484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.249491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.249882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.249890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.250253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.250261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.250624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.250631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.250913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.250920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.251290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.251297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.251480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.251487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.251667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.251675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 
00:25:04.057 [2024-11-06 14:08:43.251999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.252006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.252389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.252396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.252689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.252696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.252993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.253000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.057 [2024-11-06 14:08:43.253302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.057 [2024-11-06 14:08:43.253309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.057 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.253623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.253630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.253993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.254000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.254293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.254300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.254580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.254587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.254905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.254913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 
00:25:04.058 [2024-11-06 14:08:43.255204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.255211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.255515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.255523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.255816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.255825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.256127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.256135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.256424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.256431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.256598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.256606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.256884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.256891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.257158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.257165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.257472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.257480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.257688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.257695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 
00:25:04.058 [2024-11-06 14:08:43.258008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.258015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.258300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.258308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.258627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.258634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.258927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.258935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.259249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.259256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.259579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.259586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.259910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.259919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.260215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.260222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.260528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.260537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.260709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.260717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 
00:25:04.058 [2024-11-06 14:08:43.260872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.260881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.261075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.261082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.261432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.261439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.261648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.261655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.261902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.261909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.262087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.262094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.262362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.262369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.262623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.262630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.262945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.262952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.263257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.263264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 
00:25:04.058 [2024-11-06 14:08:43.263633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.058 [2024-11-06 14:08:43.263640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.058 qpair failed and we were unable to recover it. 00:25:04.058 [2024-11-06 14:08:43.263988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.263996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.264214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.264221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.264452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.264459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.264716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.264723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.264974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.264981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.265286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.265293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.265350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.265358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.265669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.265707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-11-06 14:08:43.266057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-11-06 14:08:43.266069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 
00:25:04.059 [2024-11-06 14:08:43.266410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.059 [2024-11-06 14:08:43.266433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:04.059 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats continuously through [2024-11-06 14:08:43.326786] (elapsed 00:25:04.059-00:25:04.346): three more times for tqpair=0x226b490, twice for tqpair=0x7fe314000b90, and for every remaining attempt for tqpair=0x7fe318000b90 -- always connect() failed, errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:25:04.346 [2024-11-06 14:08:43.327079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.327086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.327289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.327296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.327642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.327648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.327920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.327927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.328254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.328262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.328471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.328478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.328736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.328745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.329076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.329083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.329419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.329426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.329697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.329704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 
00:25:04.346 [2024-11-06 14:08:43.330006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.330012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.330311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.330318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.330628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.330635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.330899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.330906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.331167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.331174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.331532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.331539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.331720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.331727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.331897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.331904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.332254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.332262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.332507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.332513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 
00:25:04.346 [2024-11-06 14:08:43.332828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.332836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.333151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.333158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.333441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.333449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.333819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.333826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.334102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.334110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.334469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.334477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.334758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.334765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.335100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.335107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.335294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.335301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.346 [2024-11-06 14:08:43.335612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.335618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 
00:25:04.346 [2024-11-06 14:08:43.335918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.346 [2024-11-06 14:08:43.335925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.346 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.336227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.336233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.336415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.336423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.336791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.336798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.337048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.337055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.337242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.337253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.337588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.337594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.337735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.337742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.338032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.338039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.338396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.338403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 
00:25:04.347 [2024-11-06 14:08:43.338703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.338710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.339007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.339014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.339304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.339311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.339647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.339654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.339938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.339945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.340131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.340138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.340323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.340333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.340670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.340677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.340970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.340977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.341139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.341145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 
00:25:04.347 [2024-11-06 14:08:43.341337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.341344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.341406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.341412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.341721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.341727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.342015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.342022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.342259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.342266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.342567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.342574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.342785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.342792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.343078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.343084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.343404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.343411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.343710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.343716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 
00:25:04.347 [2024-11-06 14:08:43.343875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.343883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.344228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.344235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.344554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.344561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.344736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.344743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.345047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.345053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.345285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.345292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.345490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.345496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.345811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.345818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.347 qpair failed and we were unable to recover it. 00:25:04.347 [2024-11-06 14:08:43.346036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.347 [2024-11-06 14:08:43.346043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.346351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.346358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 
00:25:04.348 [2024-11-06 14:08:43.346544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.346551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.346733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.346740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.346908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.346915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.347231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.347238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.347469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.347476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.347744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.347751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.347947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.347955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.348322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.348329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.348517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.348524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.348812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.348819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 
00:25:04.348 [2024-11-06 14:08:43.348983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.348990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.349311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.349318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.349514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.349521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.349830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.349837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.350004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.350011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.350234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.350241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.350633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.350640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.351012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.351019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.351230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.351240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.351441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.351448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 
00:25:04.348 [2024-11-06 14:08:43.351757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.351763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.352046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.352052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.352235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.352242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.352531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.352538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.352810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.352817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.353203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.353210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.353347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.353355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.353650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.353658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.353918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.353926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.354119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.354126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 
00:25:04.348 [2024-11-06 14:08:43.354522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.354529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.354701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.348 [2024-11-06 14:08:43.354707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.348 qpair failed and we were unable to recover it. 00:25:04.348 [2024-11-06 14:08:43.355064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.355071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.355263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.355270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.355630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.355637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.355841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.355849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.356151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.356157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.356481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.356756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.356763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.357047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.357053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 
00:25:04.349 [2024-11-06 14:08:43.357338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.357345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.357619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.357626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.357821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.357828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.358179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.358187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.358528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.358536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.358850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.358857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.359162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.359170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.359482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.359489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.359825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.359832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.360109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.360115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 
00:25:04.349 [2024-11-06 14:08:43.360321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.360328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.360614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.360621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.360920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.360927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.360989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.360996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.361306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.361313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.361672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.361678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.361974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.361980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.362168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.362175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.362482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.362489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 00:25:04.349 [2024-11-06 14:08:43.362793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.349 [2024-11-06 14:08:43.362799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.349 qpair failed and we were unable to recover it. 
00:25:04.349 [2024-11-06 14:08:43.363101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.349 [2024-11-06 14:08:43.363108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.349 qpair failed and we were unable to recover it.
[... the same three-line error triple (posix_sock_create errno = 111 -> nvme_tcp_qpair_connect_sock on tqpair=0x7fe318000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 14:08:43.363341 through 14:08:43.421058; duplicate entries omitted ...]
00:25:04.355 [2024-11-06 14:08:43.421058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.355 [2024-11-06 14:08:43.421065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.355 qpair failed and we were unable to recover it.
00:25:04.355 [2024-11-06 14:08:43.421402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.421410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.421719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.421727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.422002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.422010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.422286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.422293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.422505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.422512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.422840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.422846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.423154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.423161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.423453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.423460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.423624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.423632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.423972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.423978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 
00:25:04.355 [2024-11-06 14:08:43.424358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.424365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.424636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.424643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.425005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.425012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.425169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.425179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.425483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.425491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.425861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.425867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.426156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.426163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.426442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.426449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.426749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.426757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.427049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.427056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 
00:25:04.355 [2024-11-06 14:08:43.427485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.427492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.427801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.427807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.428120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.428127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.428424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.428431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.355 [2024-11-06 14:08:43.428623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.355 [2024-11-06 14:08:43.428629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.355 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.428824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.428831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.429146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.429154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.429263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.429271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.429558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.429565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.429863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.429871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 
00:25:04.356 [2024-11-06 14:08:43.430177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.430183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.430399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.430407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.430736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.430744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.431060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.431066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.431233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.431239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.431581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.431588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.431896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.431903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.432167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.432174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.432590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.432597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.432871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.432878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 
00:25:04.356 [2024-11-06 14:08:43.433126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.433133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.433323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.433330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.433568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.433575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.433913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.433921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.434223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.434230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.434517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.434524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.434814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.434821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.434993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.435000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.435247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.435254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.435479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.435486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 
00:25:04.356 [2024-11-06 14:08:43.435808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.435815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.436099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.436106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.436488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.436496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.436828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.436837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.437129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.437137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.437402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.437409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.437713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.437720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.437922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.437928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.438221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.438228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.438521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.438529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 
00:25:04.356 [2024-11-06 14:08:43.438827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.438834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.439180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.439187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.356 [2024-11-06 14:08:43.439364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.356 [2024-11-06 14:08:43.439372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.356 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.439591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.439598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.439805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.439812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.440077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.440084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.440372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.440379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.440670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.440677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.440961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.440969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.441251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.441258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 
00:25:04.357 [2024-11-06 14:08:43.441614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.441621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.441913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.441919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.442227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.442233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.442530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.442538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.442835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.442842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.443124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.443131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.443457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.443464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.443793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.443801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.444121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.444128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.444475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.444481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 
00:25:04.357 [2024-11-06 14:08:43.444754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.444761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.445120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.445126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.445349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.445356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.445528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.445815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.445822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.446198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.446205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.446549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.446557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.446709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.446716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.447013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.447020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.447317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.447324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 
00:25:04.357 [2024-11-06 14:08:43.447623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.447631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.447915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.447922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.448206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.448212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.448562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.448571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.448899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.448906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.449063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.449070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.449394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.449402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.357 qpair failed and we were unable to recover it. 00:25:04.357 [2024-11-06 14:08:43.449731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.357 [2024-11-06 14:08:43.449738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.450083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.450090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.450365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.450373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 
00:25:04.358 [2024-11-06 14:08:43.450677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.450685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.450878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.450885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.451180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.451187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.451353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.451361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.451633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.451640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.451945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.451952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.452113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.452121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.452440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.452447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.452730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.452738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.453029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.453037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 
00:25:04.358 [2024-11-06 14:08:43.453358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.453365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.453536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.453543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.453861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.453868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.454186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.454193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.454408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.454415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.454737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.454744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.455046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.455053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.455380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.455388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.455694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.455700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.455988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.455995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 
00:25:04.358 [2024-11-06 14:08:43.456281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.456289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.456597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.456604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.456973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.456980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.457260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.457267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.457449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.457456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.457816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.457823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.457988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.457995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.458207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.458215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.458508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.458516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.458798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.458805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 
00:25:04.358 [2024-11-06 14:08:43.459101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.459108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.459278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.459285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.459631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.459638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.459991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.460000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.460296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.460304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.358 [2024-11-06 14:08:43.460605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.358 [2024-11-06 14:08:43.460612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.358 qpair failed and we were unable to recover it. 00:25:04.359 [2024-11-06 14:08:43.460910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.359 [2024-11-06 14:08:43.460917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.359 qpair failed and we were unable to recover it. 00:25:04.359 [2024-11-06 14:08:43.461299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.359 [2024-11-06 14:08:43.461306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.359 qpair failed and we were unable to recover it. 00:25:04.359 [2024-11-06 14:08:43.461605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.359 [2024-11-06 14:08:43.461612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.359 qpair failed and we were unable to recover it. 00:25:04.359 [2024-11-06 14:08:43.461902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.359 [2024-11-06 14:08:43.461910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.359 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-11-06 14:08:43.521685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.521692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.521997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.522003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.522323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.522330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.522484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.522491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.522822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.522829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.523140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.523147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.523459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.523466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.523763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.523770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.524057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.524064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.524260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.524267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-11-06 14:08:43.524544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.524551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.524905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.524911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.525281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.525288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.525620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.525627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.525912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.525919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.526210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.526217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.526504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.526511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.526814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.526821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.527123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.527130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.527433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.527440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-11-06 14:08:43.527762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.527770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.528114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.528121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.528306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.528313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.528576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.528583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.528874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.528881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.529165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.529172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.529497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.529504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.529794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.529800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.530144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.530151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.530459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.530466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-11-06 14:08:43.530820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.530826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-11-06 14:08:43.531116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-11-06 14:08:43.531123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.531412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.531419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.531711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.531718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.532012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.532018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.532358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.532365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.532662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.532669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.532955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.532961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.533313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.533320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.533502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.533509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-11-06 14:08:43.533771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.533778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.533979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.533985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.534273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.534280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.534563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.534569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.534914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.534921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.535264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.535271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.535566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.535573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.535858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.535865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.536157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.536164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.536362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.536369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-11-06 14:08:43.536673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.536680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.536999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.537006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.537293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.537300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.537507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.537514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.537693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.537700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.538055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.538062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.538360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.538368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.538677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.538684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.538854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.538861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.539188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.539195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-11-06 14:08:43.539349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.539358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.539693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.539700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.540006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.540013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.540295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.540302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.540592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.540599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.540907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.540914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.541246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.541253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.541569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.541576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.541867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.541874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.542163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.542170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-11-06 14:08:43.542464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.542471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.542660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.542667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.542983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-11-06 14:08:43.542990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-11-06 14:08:43.543185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.543192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.543465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.543472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.543773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.543779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.543994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.544303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.544309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.544606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.544613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.544944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.544951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-11-06 14:08:43.545234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.545241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.545554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.545561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.545746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.545753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.546082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.546089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.546279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.546286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.546561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.546568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.546883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.546889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.547173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.547179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.547482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.547489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.547675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.547682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-11-06 14:08:43.547989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.547995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.548371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.548378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.548556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.548563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.548891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.548898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.549201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.549207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.549507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.549514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.549852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.549859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.550161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.550168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.550464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.550471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.550658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.550665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-11-06 14:08:43.550952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.550960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.551131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.551139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.551468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.551475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.551782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.551789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.551991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.551998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.552342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.552349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.552492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.552498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.552842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.552849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.553030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.553036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.553306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.553313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-11-06 14:08:43.553635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.553642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.553962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.554161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.554168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.554344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.554351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.554647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.554654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.554945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.554952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.555249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-11-06 14:08:43.555256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-11-06 14:08:43.555563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.555570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.555782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.555789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.556112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.556119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 
00:25:04.367 [2024-11-06 14:08:43.556411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.556418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.556732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.556739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.557060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.557066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.557389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.557396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.557676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.557683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.558010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.558017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.558315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.558322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.558633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.558640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.558821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.558828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-11-06 14:08:43.559175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.559182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 
00:25:04.367 [2024-11-06 14:08:43.559483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-11-06 14:08:43.559490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it.
[... roughly 200 near-identical repetitions elided (timestamps 14:08:43.559 through 14:08:43.620): every reconnect attempt by tqpair=0x7fe318000b90 to 10.0.0.2 port 4420 fails with connect() errno = 111, and each time the qpair cannot be recovered ...]
00:25:04.649 [2024-11-06 14:08:43.620415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.620421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it.
00:25:04.649 [2024-11-06 14:08:43.620716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.620724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.620908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.620915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.621203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.621209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.621548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.621555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.621910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.621916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.622208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.622215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.622477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.622485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.622823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.622829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.623119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.623125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.623443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.623450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 
00:25:04.649 [2024-11-06 14:08:43.623783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.623790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.624069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.624076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.624361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.624368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.624663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.649 [2024-11-06 14:08:43.624670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.649 qpair failed and we were unable to recover it. 00:25:04.649 [2024-11-06 14:08:43.624966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.624973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.625303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.625310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.625712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.625719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.626018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.626025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.626343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.626350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.626715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.626721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 
00:25:04.650 [2024-11-06 14:08:43.626914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.626921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.627280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.627287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.627484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.627491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.627800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.627806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.628114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.628120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.628413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.628420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.628726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.628733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.629099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.629106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.629402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.629409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.629739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.629746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 
00:25:04.650 [2024-11-06 14:08:43.629968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.629975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.630299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.630307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.630609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.630615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.630912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.630919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.631211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.631218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.631519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.631526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.631686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.631693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.631890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.631897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.632231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.632238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.632553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.632561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 
00:25:04.650 [2024-11-06 14:08:43.632851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.632860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.633177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.633184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.633488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.633495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.633806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.633812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.634096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.634102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.634399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.634407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.634723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.634730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.635077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.635084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.635388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.635395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.635672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.635679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 
00:25:04.650 [2024-11-06 14:08:43.635976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.635983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.636266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.636272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.650 qpair failed and we were unable to recover it. 00:25:04.650 [2024-11-06 14:08:43.636564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.650 [2024-11-06 14:08:43.636570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.636764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.636770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.636960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.636966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.637275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.637282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.637628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.637635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.637912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.637919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.638219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.638227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.638405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.638413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 
00:25:04.651 [2024-11-06 14:08:43.638581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.638588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.638886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.638893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.639189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.639196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.639491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.639498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.639685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.639692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.639877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.639883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.640223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.640230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.640547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.640555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.640846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.640853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.641140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.641147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 
00:25:04.651 [2024-11-06 14:08:43.641438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.641445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.641737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.641744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.642042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.642050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.642248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.642255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.642289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.642297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.642583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.642590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.642880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.642887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.643178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.643185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.643475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.643482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.643782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.643789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 
00:25:04.651 [2024-11-06 14:08:43.644097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.644104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.644264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.644272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.644528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.644535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.644818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.644825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.645108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.645115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.645422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.645429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.645729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.645736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.646049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.646056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.646340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.646347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.646636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.646643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 
00:25:04.651 [2024-11-06 14:08:43.646927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.651 [2024-11-06 14:08:43.646934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.651 qpair failed and we were unable to recover it. 00:25:04.651 [2024-11-06 14:08:43.647298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.647306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.647583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.647589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.647889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.647896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.648063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.648071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.648383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.648391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.648583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.648590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.648880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.648887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.649217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.649223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.649575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.649582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 
00:25:04.652 [2024-11-06 14:08:43.649887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.649894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.650203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.650210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.650432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.650440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.650751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.650757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.651054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.651061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.651389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.651397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.651681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.651688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.651846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.651855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.652171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.652179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.652483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.652490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 
00:25:04.652 [2024-11-06 14:08:43.652823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.652830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.653160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.653167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.653359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.653366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.653617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.653623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.653918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.653925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.654227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.654234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.654541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.654548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.654833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.654840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.654993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.655001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.655226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.655233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 
00:25:04.652 [2024-11-06 14:08:43.655554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.655561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.655726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.655734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.656063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.656070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.656223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.656231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.652 [2024-11-06 14:08:43.656522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.652 [2024-11-06 14:08:43.656529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.652 qpair failed and we were unable to recover it. 00:25:04.653 [2024-11-06 14:08:43.656852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.653 [2024-11-06 14:08:43.656859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.653 qpair failed and we were unable to recover it. 00:25:04.653 [2024-11-06 14:08:43.657142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.653 [2024-11-06 14:08:43.657148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.653 qpair failed and we were unable to recover it. 00:25:04.653 [2024-11-06 14:08:43.657461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.653 [2024-11-06 14:08:43.657468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.653 qpair failed and we were unable to recover it. 00:25:04.653 [2024-11-06 14:08:43.657781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.653 [2024-11-06 14:08:43.657788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.653 qpair failed and we were unable to recover it. 00:25:04.653 [2024-11-06 14:08:43.657979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.653 [2024-11-06 14:08:43.657986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.653 qpair failed and we were unable to recover it. 
00:25:04.653 [2024-11-06 14:08:43.658236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.653 [2024-11-06 14:08:43.658250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.653 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every reconnect attempt between 14:08:43.658 and 14:08:43.719, always with errno = 111 (ECONNREFUSED) on tqpair=0x7fe318000b90 against addr=10.0.0.2, port=4420 ...]
00:25:04.658 [2024-11-06 14:08:43.719166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.658 [2024-11-06 14:08:43.719173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.658 qpair failed and we were unable to recover it.
00:25:04.658 [2024-11-06 14:08:43.719323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.719330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.719662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.719669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.719864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.719871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.720195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.720203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.720478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.720485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.720635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.720643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.720955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.720962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.721159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.721167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.721513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.721520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 00:25:04.658 [2024-11-06 14:08:43.721829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.658 [2024-11-06 14:08:43.721836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.658 qpair failed and we were unable to recover it. 
00:25:04.658 [2024-11-06 14:08:43.722023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.722029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.722254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.722261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.722556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.722563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.722856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.722862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.723176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.723182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.723476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.723483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.723803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.723810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.724077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.724084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.724249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.724257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.724544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.724551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 
00:25:04.659 [2024-11-06 14:08:43.724832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.724839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.725125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.725132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.725413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.725420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.725721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.725728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.726014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.726020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.726359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.726366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.726664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.726670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.726961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.726968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.727266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.727273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.727583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.727589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 
00:25:04.659 [2024-11-06 14:08:43.727882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.727889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.728252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.728259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.728556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.728563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.728868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.728875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.729211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.729219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.729384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.729392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.729752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.729759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.730039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.730046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.730370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.730378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.730678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.730686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 
00:25:04.659 [2024-11-06 14:08:43.730985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.730992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.731297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.731304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.731609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.731616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.731921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.731928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.732215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.732223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.732520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.732527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.732824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.732831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.733125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.733133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.659 [2024-11-06 14:08:43.733424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.659 [2024-11-06 14:08:43.733432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.659 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.733733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.733740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 
00:25:04.660 [2024-11-06 14:08:43.734029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.734037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.734362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.734369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.734601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.734608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.734884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.734891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.735186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.735193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.735493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.735500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.735796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.735803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.736092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.736099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.736489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.736497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.736780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.736787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 
00:25:04.660 [2024-11-06 14:08:43.737151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.737157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.737459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.737466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.737760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.737767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.737941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.737949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.738224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.738231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.738538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.738546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.738841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.738849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.739143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.739150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.739452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.739460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.739772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.739779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 
00:25:04.660 [2024-11-06 14:08:43.740057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.740064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.740366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.740373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.740756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.740763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.741047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.741054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.741339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.741347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.741645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.741653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.741952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.741960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.742251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.742259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.742555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.742562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.742739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.742746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 
00:25:04.660 [2024-11-06 14:08:43.743012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.743018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.743312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.743319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.743522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.743529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.743865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.743872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.744155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.744162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.744539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.744546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.744740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.660 [2024-11-06 14:08:43.744746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.660 qpair failed and we were unable to recover it. 00:25:04.660 [2024-11-06 14:08:43.745073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.745082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.745261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.745269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.745567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.745575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 
00:25:04.661 [2024-11-06 14:08:43.745902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.745909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.746090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.746097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.746308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.746316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.746656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.746663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.747006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.747013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.747298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.747306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.747614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.747621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.747913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.747920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.748240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.748250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.748537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.748544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 
00:25:04.661 [2024-11-06 14:08:43.748841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.748848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.749060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.749067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.749392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.749399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.749722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.749729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.750025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.750032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.750402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.750409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.750603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.750610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.750878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.750885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.751178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.751186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.751493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.751500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 
00:25:04.661 [2024-11-06 14:08:43.751845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.751852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.752137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.752144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.752458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.752465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.752771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.752778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.753082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.753090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.753276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.753283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.753461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.753468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.753750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.753757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.754066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.754074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.754397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.754405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 
00:25:04.661 [2024-11-06 14:08:43.754711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.754717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.661 [2024-11-06 14:08:43.755031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.661 [2024-11-06 14:08:43.755038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.661 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.755233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.755240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.755566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.755574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.755741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.755749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.756087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.756095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.756409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.756417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.756701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.756710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.757090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.757097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 00:25:04.662 [2024-11-06 14:08:43.757424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.757432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it. 
00:25:04.662 [2024-11-06 14:08:43.757760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.662 [2024-11-06 14:08:43.757767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.662 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every subsequent reconnection attempt, with only the timestamps advancing, up to the final attempt below ...]
00:25:04.667 [2024-11-06 14:08:43.819961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.819968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it.
00:25:04.667 [2024-11-06 14:08:43.820260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.820267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.820567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.820575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.820894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.820902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.821212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.821220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.821507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.821515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.821827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.821835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.822145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.822153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.822332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.667 [2024-11-06 14:08:43.822340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.667 qpair failed and we were unable to recover it. 00:25:04.667 [2024-11-06 14:08:43.822612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.822620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.822817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.822824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 
00:25:04.668 [2024-11-06 14:08:43.823107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.823114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.823401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.823409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.823588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.823596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.823883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.823890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.824175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.824183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.824493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.824500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.824829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.824837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.825125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.825132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.825490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.825498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.825767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.825774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 
00:25:04.668 [2024-11-06 14:08:43.826095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.826103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.826394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.826402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.826571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.826578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.826880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.826888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.827214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.827222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.827527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.827535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.827700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.827707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.828027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.828035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.828355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.828362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.828568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.828575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 
00:25:04.668 [2024-11-06 14:08:43.828869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.828876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.829173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.829180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.829495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.829502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.829777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.829784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.829945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.829953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.830242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.830254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.830540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.830548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.830838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.830845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.831010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.831018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.831398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.831405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 
00:25:04.668 [2024-11-06 14:08:43.831725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.831732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.832048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.832056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.832387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.832394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.832700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.832707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.833037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.833045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.833324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.668 [2024-11-06 14:08:43.833331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.668 qpair failed and we were unable to recover it. 00:25:04.668 [2024-11-06 14:08:43.833639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.833646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.833996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.834003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.834285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.834293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.834585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.834592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 
00:25:04.669 [2024-11-06 14:08:43.834877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.834883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.835174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.835181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.835484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.835491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.835772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.835780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.836086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.836095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.836266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.836274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.836532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.836538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.836892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.836899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.837228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.837235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.837588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.837595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 
00:25:04.669 [2024-11-06 14:08:43.837875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.837882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.838179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.838186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.838388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.838395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.838679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.838686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.838972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.838978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.839165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.839172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.839442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.839450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.839750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.839757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.840085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.840092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.840425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.840432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 
00:25:04.669 [2024-11-06 14:08:43.840718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.840725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.841058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.841065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.841454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.841462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.841623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.841631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.841918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.841925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.842264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.842271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.842617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.842625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.842916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.842923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.843227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.843235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.843430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.843438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 
00:25:04.669 [2024-11-06 14:08:43.843706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.843713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.844027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.844033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.844393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.844400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.844438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.844445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.669 [2024-11-06 14:08:43.844725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.669 [2024-11-06 14:08:43.844731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.669 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.845012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.845020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.845349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.845356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.845695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.845701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.845904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.845910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.846176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.846182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 
00:25:04.670 [2024-11-06 14:08:43.846377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.846384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.846653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.846660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.846844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.846851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.847169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.847176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.847454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.847464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.847768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.847775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.848080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.848087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.848414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.848421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.848713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.848719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.849008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.849015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 
00:25:04.670 [2024-11-06 14:08:43.849300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.849307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.849596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.849603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.849897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.849904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.850190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.850197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.850499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.850506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.850806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.850813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.851114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.851121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.851407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.851415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.851708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.851715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.851989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.851996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 
00:25:04.670 [2024-11-06 14:08:43.852160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.852167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.852457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.852465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.852758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.852765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.852920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.852927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.853196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.853203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.853523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.853530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.853878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.853885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.854087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.854093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.854319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.854326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.670 qpair failed and we were unable to recover it. 00:25:04.670 [2024-11-06 14:08:43.854621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.670 [2024-11-06 14:08:43.854628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 
00:25:04.671 [2024-11-06 14:08:43.854917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.854923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.855226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.855233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.855376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.855384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.855688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.855695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.855878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.855885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.856154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.856161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.856456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.856463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.856803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.856810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.857197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.857204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 00:25:04.671 [2024-11-06 14:08:43.857492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.671 [2024-11-06 14:08:43.857499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.671 qpair failed and we were unable to recover it. 
00:25:04.671 [2024-11-06 14:08:43.857825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.671 [2024-11-06 14:08:43.857831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.671 qpair failed and we were unable to recover it.
00:25:04.952 [2024-11-06 14:08:43.916648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.916655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.916966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.916973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.917284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.917292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.917588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.917595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.917883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.917891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.918183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.918190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.918504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.918511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.918858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.918865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.919035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.952 [2024-11-06 14:08:43.919042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.952 qpair failed and we were unable to recover it. 00:25:04.952 [2024-11-06 14:08:43.919226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.919234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 
00:25:04.953 [2024-11-06 14:08:43.919544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.919551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.919850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.919857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.920156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.920163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.920446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.920453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.920753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.920761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.921049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.921056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.921357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.921364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.921649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.921656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.921940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.921947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.922230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.922237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 
00:25:04.953 [2024-11-06 14:08:43.922582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.922589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.922873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.922880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.923182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.923189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.923363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.923371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.923725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.923732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.924059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.924065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.924361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.924368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.924708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.924715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.925002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.925009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.925348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.925355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 
00:25:04.953 [2024-11-06 14:08:43.925734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.925741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.926057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.926065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.926355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.926362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.926665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.926672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.926968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.926975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.927282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.927290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.927612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.927619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.927904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.927911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.928199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.928206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.928482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.928489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 
00:25:04.953 [2024-11-06 14:08:43.928782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.928789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.929078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.929084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.929431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.929438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.929591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.929599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.929909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.929920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.930120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.930127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.930309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.930316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.953 qpair failed and we were unable to recover it. 00:25:04.953 [2024-11-06 14:08:43.930615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.953 [2024-11-06 14:08:43.930622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.930914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.930920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.931211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.931218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 
00:25:04.954 [2024-11-06 14:08:43.931564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.931571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.931882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.931888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.932186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.932193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.932493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.932500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.932683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.932690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.933012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.933019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.933311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.933318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.933602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.933609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.933896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.933903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.934235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.934241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 
00:25:04.954 [2024-11-06 14:08:43.934529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.934535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.934847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.934854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.935159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.935166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.935466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.935473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.935772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.935780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.936077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.936083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.936308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.936315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.936629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.936636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.936935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.936941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.937232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.937238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 
00:25:04.954 [2024-11-06 14:08:43.937529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.937536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.937862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.937869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.938153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.938160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.938399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.938407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.938787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.938793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.939104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.939111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.939418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.939426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.939710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.939717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.939905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.939912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.940229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.940236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 
00:25:04.954 [2024-11-06 14:08:43.940430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.940437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.940624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.940631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.940838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.940845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.941187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.941194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.941491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.941499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.954 [2024-11-06 14:08:43.941804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.954 [2024-11-06 14:08:43.941811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.954 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.942117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.942124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.942413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.942420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.942728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.942735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.943034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.943040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 
00:25:04.955 [2024-11-06 14:08:43.943328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.943335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.943654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.943661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.943955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.943962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.944266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.944273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.944573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.944580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.944885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.944892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.945188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.945194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.945385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.945392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.945701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.945708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.946063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.946070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 
00:25:04.955 [2024-11-06 14:08:43.946361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.946368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.946660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.946667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.946970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.946977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.947276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.947283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.947577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.947583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.947884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.947890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.948107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.948114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.948481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.948488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.948774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.948781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.949071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.949078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 
00:25:04.955 [2024-11-06 14:08:43.949401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.949409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.949706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.949714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.950005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.950012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.950338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.950345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.950643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.950650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.950945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.950952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.951249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.951258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.951576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.951583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.951943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.951951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.952113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.952120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 
00:25:04.955 [2024-11-06 14:08:43.952435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.952442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.952720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.952728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.953024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.953031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.953316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.955 [2024-11-06 14:08:43.953323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.955 qpair failed and we were unable to recover it. 00:25:04.955 [2024-11-06 14:08:43.953708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.953714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.954018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.954025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.954365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.954372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.954655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.954662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.955000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.955007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.955316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.955323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 
00:25:04.956 [2024-11-06 14:08:43.955625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.955632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.955812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.955819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.956092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.956099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.956357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.956364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.956728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.956735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.957022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.957028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.957286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.957293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.957630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.957637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.957956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.957963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 00:25:04.956 [2024-11-06 14:08:43.958255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.956 [2024-11-06 14:08:43.958263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.956 qpair failed and we were unable to recover it. 
00:25:04.956 [2024-11-06 14:08:43.958535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.956 [2024-11-06 14:08:43.958542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.956 qpair failed and we were unable to recover it.
[... the same three-line record — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every consecutive reconnect attempt between 14:08:43.958 and 14:08:44.020, with only the timestamps advancing ...]
00:25:04.962 [2024-11-06 14:08:44.020094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.962 [2024-11-06 14:08:44.020101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.962 qpair failed and we were unable to recover it.
00:25:04.962 [2024-11-06 14:08:44.020401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.020409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.020708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.020716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.021002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.021008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.021304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.021311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.021665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.021672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.021962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.021969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.022348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.022355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.022543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.022550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.022725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.022732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.023045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.023052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 
00:25:04.962 [2024-11-06 14:08:44.023336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.023343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.023677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.023684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.024003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.024010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.024184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.024190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.024457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.024464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.024785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.024792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.025075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.025082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.025451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.025459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.025795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.025802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.026090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.026097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 
00:25:04.962 [2024-11-06 14:08:44.026399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.026407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.026705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.026712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.027013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.027020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.027365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.027372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.027706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.027713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.028018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.028024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.028303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.028310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.028631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.028637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.028810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.028817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.029183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.029190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 
00:25:04.962 [2024-11-06 14:08:44.029489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.962 [2024-11-06 14:08:44.029496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.962 qpair failed and we were unable to recover it. 00:25:04.962 [2024-11-06 14:08:44.029779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.029786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.030084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.030091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.030392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.030399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.030568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.030575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.030881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.030888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.031070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.031077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.031383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.031390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.031683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.031690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.031972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.031979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 
00:25:04.963 [2024-11-06 14:08:44.032265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.032272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.032436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.032445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.032782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.032788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.033130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.033137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.033323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.033330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.033641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.033648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.033950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.033957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.034240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.034250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.034608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.034615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.034917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.034924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 
00:25:04.963 [2024-11-06 14:08:44.035208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.035215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.035603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.035610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.035923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.035931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.036222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.036229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.036546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.036553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.036830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.036838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.037017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.037025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.037344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.037351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.037642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.037649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.037964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.037971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 
00:25:04.963 [2024-11-06 14:08:44.038255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.038262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.038560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.038567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.038873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.038880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.039047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.039055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.039378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.039385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.039684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.039691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.039982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.039989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.040140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.040148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.040448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.040456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.963 qpair failed and we were unable to recover it. 00:25:04.963 [2024-11-06 14:08:44.040629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.963 [2024-11-06 14:08:44.040635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 
00:25:04.964 [2024-11-06 14:08:44.040925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.040932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.041264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.041271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.041540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.041547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.041858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.041865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.042167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.042174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.042362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.042369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.042676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.042683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.042883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.042890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.043066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.043073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.043381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.043388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 
00:25:04.964 [2024-11-06 14:08:44.043688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.043695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.044030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.044040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.044369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.044377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.044581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.044589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.044910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.044917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.045252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.045260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.045539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.045545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.045833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.045840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.046126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.046133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.046416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.046423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 
00:25:04.964 [2024-11-06 14:08:44.046721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.046728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.046907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.046914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.047183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.047190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.047555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.047562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.047891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.047898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.048201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.048209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.048521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.048528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.048695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.048702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.048997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.049004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.049289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.049296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 
00:25:04.964 [2024-11-06 14:08:44.049613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.049620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.050040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.050047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.050347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.050354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.050703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.050709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.050995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.051002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.051284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.051291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.051593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.051599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.051940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.964 [2024-11-06 14:08:44.051947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.964 qpair failed and we were unable to recover it. 00:25:04.964 [2024-11-06 14:08:44.052233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.052240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.052527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.052534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 
00:25:04.965 [2024-11-06 14:08:44.052901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.052908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.053281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.053287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.053575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.053582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.053888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.053895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.054230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.054237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.054546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.054553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.054854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.054861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.055139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.055146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.055324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.055331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.055592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.055599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 
00:25:04.965 [2024-11-06 14:08:44.055900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.055907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.056195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.056204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.056493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.056500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.056787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.056794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.057088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.057095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.057404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.057412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.057762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.057769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.058080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.058087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.058321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.058328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 00:25:04.965 [2024-11-06 14:08:44.058660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.965 [2024-11-06 14:08:44.058667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.965 qpair failed and we were unable to recover it. 
00:25:04.965 [2024-11-06 14:08:44.059015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.965 [2024-11-06 14:08:44.059021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.965 qpair failed and we were unable to recover it.
00:25:04.965 [... last 3 lines repeated ~200 times, 14:08:44.059 through 14:08:44.120: every connect() retry to 10.0.0.2 port 4420 refused with errno = 111 ...]
00:25:04.971 [2024-11-06 14:08:44.120796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.971 [2024-11-06 14:08:44.120802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.971 qpair failed and we were unable to recover it.
00:25:04.971 [2024-11-06 14:08:44.121092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.121099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.121388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.121395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.121701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.121708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.122018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.122025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.122325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.122332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.122604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.122611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.122909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.122916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.123216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.123223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.123495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.123504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.123795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.123802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 
00:25:04.971 [2024-11-06 14:08:44.124119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.124126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.124413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.124420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.124729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.124736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.124933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.124940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.125233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.125240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.125624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.125631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.125928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.125934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.126232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.126238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.126554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.126561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.126861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.126868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 
00:25:04.971 [2024-11-06 14:08:44.127161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.127168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.127455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.127462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.127793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.127800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.128120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.128127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.128418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.128425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.128733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.128739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.129017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.129024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.971 [2024-11-06 14:08:44.129342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.971 [2024-11-06 14:08:44.129349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.971 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.129635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.129642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.129959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.129966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 
00:25:04.972 [2024-11-06 14:08:44.130260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.130267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.130621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.130628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.130914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.130921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.131205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.131212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.131371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.131379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.131722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.131728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.132012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.132019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.132312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.132319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.132644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.132651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.132936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.132943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 
00:25:04.972 [2024-11-06 14:08:44.133250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.133257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.133420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.133427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.133705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.133712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.134015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.134022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.134310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.134317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.134718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.134725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.135014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.135020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.135312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.135319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.135603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.135611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.135986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.135993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 
00:25:04.972 [2024-11-06 14:08:44.136177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.136185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.136477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.136484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.136786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.136792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.136988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.136995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.137322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.137329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.137632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.137638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.137843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.137850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.138142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.138149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.138496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.138503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.138789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.138796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 
00:25:04.972 [2024-11-06 14:08:44.139085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.139092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.139290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.139298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.139622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.139629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.139938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.139944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.140287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.140294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.140630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.972 [2024-11-06 14:08:44.140637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.972 qpair failed and we were unable to recover it. 00:25:04.972 [2024-11-06 14:08:44.140804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.140810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.141002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.141009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.141340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.141348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.141655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.141662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 
00:25:04.973 [2024-11-06 14:08:44.141963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.141970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.142286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.142293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.142586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.142593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.142901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.142907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.143205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.143212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.143423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.143430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.143752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.143759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.144053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.144060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.144364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.144371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.144667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.144674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 
00:25:04.973 [2024-11-06 14:08:44.144971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.144978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.145291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.145298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.145662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.145669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.145883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.145889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.146206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.146213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.146520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.146527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.146829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.146836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.147037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.147044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.147352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.147361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.147644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.147651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 
00:25:04.973 [2024-11-06 14:08:44.147949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.147956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.148148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.148154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.148440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.148447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.148730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.148736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.149046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.149053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.149349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.149356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.149658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.149665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.149953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.149960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.150242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.150251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.150609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.150616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 
00:25:04.973 [2024-11-06 14:08:44.150930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.150937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.151221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.151228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.151532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.151540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.151882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.151888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.973 qpair failed and we were unable to recover it. 00:25:04.973 [2024-11-06 14:08:44.152171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.973 [2024-11-06 14:08:44.152178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.152519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.152526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.152861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.152868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.153155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.153163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.153347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.153355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.153671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.153678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 
00:25:04.974 [2024-11-06 14:08:44.153965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.153972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.154274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.154281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.154587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.154594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.154902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.154909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.155242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.155253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.155629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.155636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.155980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.155986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.156270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.156277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.156578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.156586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.156873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.156880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 
00:25:04.974 [2024-11-06 14:08:44.157053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.157060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.157349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.157356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.157674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.157681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.157981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.157987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.158293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.158300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.158593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.158600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.158886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.158893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.159231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.159238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.159530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.159538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.159839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.159846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 
00:25:04.974 [2024-11-06 14:08:44.160136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.160144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.160331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.160339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.160525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.160532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.160814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.160821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.161093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.161100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.161463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.161470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.161763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.161770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.162064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.162070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.162359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.162366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.162661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.162668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 
00:25:04.974 [2024-11-06 14:08:44.162973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.162980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.163264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.163271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.163591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.974 [2024-11-06 14:08:44.163598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.974 qpair failed and we were unable to recover it. 00:25:04.974 [2024-11-06 14:08:44.163885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.163892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.164081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.164089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.164417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.164425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.164625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.164632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.164863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.164870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.165246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.165253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.165564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.165571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 
00:25:04.975 [2024-11-06 14:08:44.165796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.165803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.166117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.166124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.166509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.166516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.166871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.166878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.167047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.167054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.167347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.167354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.167676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.167683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.167971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.167978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.168259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.168266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.168521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.168528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 
00:25:04.975 [2024-11-06 14:08:44.168820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.168827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.169187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.169194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.169390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.169397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.169698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.169705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.169996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.170003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.170319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.170326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.170547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.170554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.170898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.170905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.171254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.171262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.171570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.171576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 
00:25:04.975 [2024-11-06 14:08:44.171894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.171901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.172218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.172225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.172544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.172551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.172719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.172726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.173082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.173089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.173420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.173427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.173738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.975 [2024-11-06 14:08:44.173746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.975 qpair failed and we were unable to recover it. 00:25:04.975 [2024-11-06 14:08:44.173914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.173922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.174230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.174237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.174543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.174551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 
00:25:04.976 [2024-11-06 14:08:44.174848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.174855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.175009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.175016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.175346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.175353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.175663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.175670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.175954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.175961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.176315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.176322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.176510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.176517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.176704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.176712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.177014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.177021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.177319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.177326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 
00:25:04.976 [2024-11-06 14:08:44.177648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.177656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.177845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.177852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.178028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.178035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.178327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.178334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.178649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.178655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.178956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.178962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.179217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.179224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.179578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.179586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.179944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.179951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.180264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.180271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 
00:25:04.976 [2024-11-06 14:08:44.180426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.180433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.180729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.180736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.181036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.181043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.181270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.181277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.181497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.181504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.181834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.181841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.182198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.182205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.182393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.182400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.182652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.182661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.182971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.182978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 
00:25:04.976 [2024-11-06 14:08:44.183227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.183235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.183522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.183529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.183824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.183831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.184150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.184157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.184530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.976 [2024-11-06 14:08:44.184537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.976 qpair failed and we were unable to recover it. 00:25:04.976 [2024-11-06 14:08:44.184836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.184843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.185150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.185156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.185459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.185466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.185756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.185764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.186092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.186100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 
00:25:04.977 [2024-11-06 14:08:44.186374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.186381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.186758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.186765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.186942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.186949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.187334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.187341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.187614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.187621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.187913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.187920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.188079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.188086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.188305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.188312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.188622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.188629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.188920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.188927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 
00:25:04.977 [2024-11-06 14:08:44.189248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.189255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.189596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.189603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.189909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.189916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.190199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.190206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.190499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.190506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.190856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.190863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.191165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.191172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.191493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.191500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.191765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.191772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.192070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.192077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 
00:25:04.977 [2024-11-06 14:08:44.192256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.192263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.192550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.192557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.192832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.192839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.193020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.193027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.193342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.193349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.193652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.193658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.193790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.193797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.194054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.194060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.194378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.194386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.194681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.194688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 
00:25:04.977 [2024-11-06 14:08:44.194989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.194996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.195298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.195305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.195567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.195574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.977 [2024-11-06 14:08:44.195866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.977 [2024-11-06 14:08:44.195873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.977 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.196157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.196164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.196451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.196458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.196818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.196825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.196978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.196986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.197274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.197281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.197463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.197469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 
00:25:04.978 [2024-11-06 14:08:44.197774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.197781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.198084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.198091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.198401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.198409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.198577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.198585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.198888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.198895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.199211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.199218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.199562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.199569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.199708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.199715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.200034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.200041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.200305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.200312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 
00:25:04.978 [2024-11-06 14:08:44.200587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.200594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.200889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.200896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.201280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.201287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.201653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.201660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.201826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.201832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.202165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.202172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.202479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.202486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.202672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.202680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.202952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.202958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.203280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.203287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 
00:25:04.978 [2024-11-06 14:08:44.203587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.203594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.203872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.203879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.204196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.204203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.204589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.204596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.204756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.204764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.205035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.205042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.205350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.205357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.205687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.205694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.206037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.206045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.206200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.206207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 
00:25:04.978 [2024-11-06 14:08:44.206558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.206565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.206828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.978 [2024-11-06 14:08:44.206835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.978 qpair failed and we were unable to recover it. 00:25:04.978 [2024-11-06 14:08:44.207113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.207120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.207409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.207416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.207726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.207733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.208036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.208043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.208352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.208359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.208631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.208638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.208939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.208946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 00:25:04.979 [2024-11-06 14:08:44.209272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.979 [2024-11-06 14:08:44.209280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:04.979 qpair failed and we were unable to recover it. 
00:25:04.979 [2024-11-06 14:08:44.209587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.979 [2024-11-06 14:08:44.209594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:04.979 qpair failed and we were unable to recover it.
00:25:04.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1045546 Killed "${NVMF_APP[@]}" "$@"
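
The errno = 111 in the connect() failure records above is ECONNREFUSED: once target_disconnect.sh kills the nvmf target process (the Killed line), nothing is listening on 10.0.0.2:4420, so every reconnect attempt from the host side is refused until the target is restarted. A minimal standalone sketch that reproduces the same errno under those conditions (not SPDK code; the address and port are taken from the log):

/* Illustration only: connect() to an address where nothing is listening
 * fails with ECONNREFUSED, which is 111 on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the nvmf target killed, this prints: connect() failed,
         * errno = 111 (Connection refused) -- the same failure the
         * posix_sock_create records above report. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}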
00:25:05.277 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:05.277 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:05.277 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:05.277 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:05.277 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1046577
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1046577
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1046577 ']'
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:05.278 14:08:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
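
The waitforlisten 1046577 step above waits until the freshly spawned nvmf_tgt (pid 1046577) is accepting RPC connections on /var/tmp/spdk.sock, retrying up to max_retries=100 times. A hypothetical, self-contained sketch of such a wait loop; the socket path and retry count come from the trace, but the loop itself is illustrative, not the autotest implementation:

/* Conceptual "wait for the app to listen" loop: retry connect() on the
 * app's UNIX domain RPC socket until it succeeds or we run out of tries. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd); /* the target is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000); /* back off briefly before the next attempt */
    }
    return -1; /* never came up within the retry budget */
}

int main(void)
{
    /* 100 mirrors the "local max_retries=100" seen in the trace above. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "target never started listening\n");
        return 1;
    }
    return 0;
}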
00:25:05.281 [2024-11-06 14:08:44.262287] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
00:25:05.281 [2024-11-06 14:08:44.262339] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
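
The DPDK EAL parameters line records the argv that SPDK's env layer hands to DPDK when the restarted target initializes (the core mask 0xF0 matches the -m 0xF0 passed to nvmfappstart). A rough bare-DPDK equivalent of that initialization, shown only to make the parameters concrete; this is a sketch, not SPDK's env_dpdk code, and it uses a subset of the flags from the log line:

/* Sketch: initialize DPDK's EAL with parameters similar to those the
 * target logged above, then tear it down again. */
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                             /* program name, as in the log */
        "-c", "0xF0",                       /* core mask from -m 0xF0 */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() parses the EAL arguments and brings up hugepage
     * memory, lcores, etc.; a negative return means init failed. */
    if (rte_eal_init(eal_argc, eal_argv) < 0)
        return 1;

    rte_eal_cleanup();
    return 0;
}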
00:25:05.281 [2024-11-06 14:08:44.263166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.263174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.263494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.263502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.263779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.263787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.264116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.264123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.264422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.264430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.264717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.264724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.265021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.265028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.265314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.265322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.265646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.265654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.265970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.265978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 
00:25:05.281 [2024-11-06 14:08:44.266255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.266263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.266538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.266545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.266827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.266836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.267117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.267124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.267405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.267412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.267705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.267713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.268030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.268037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.268349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.268357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.268638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.281 [2024-11-06 14:08:44.268645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.281 qpair failed and we were unable to recover it. 00:25:05.281 [2024-11-06 14:08:44.268928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.268935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 
00:25:05.282 [2024-11-06 14:08:44.269102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.269110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.269311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.269318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.269610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.269618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.269962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.269969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.270177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.270184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.270347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.270354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.270632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.270639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.270802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.270811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.271119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.271126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.271470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.271478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 
00:25:05.282 [2024-11-06 14:08:44.271803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.271811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.272077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.272084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.272385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.272392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.272682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.272689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.272915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.272923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.273212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.273220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.273521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.273529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.273811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.273819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.274100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.274108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.274396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.274404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 
00:25:05.282 [2024-11-06 14:08:44.274698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.274705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.274987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.274994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.275307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.275315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.275602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.275610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.275934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.275942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.276224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.276232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.276534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.276542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.276827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.276834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.277157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.277165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.277479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.277486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 
00:25:05.282 [2024-11-06 14:08:44.277827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.277835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.278132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.278139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.282 qpair failed and we were unable to recover it. 00:25:05.282 [2024-11-06 14:08:44.278456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.282 [2024-11-06 14:08:44.278467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.278759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.278766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.279074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.279082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.279358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.279366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.279641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.279648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.279925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.279933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.280223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.280231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.280522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.280529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 
00:25:05.283 [2024-11-06 14:08:44.280812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.280819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.280984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.280992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.281170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.281178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.281479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.281487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.281773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.281780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.282066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.282074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.282362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.282369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.282653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.282660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.282944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.282951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.283107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.283115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 
00:25:05.283 [2024-11-06 14:08:44.283318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.283326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.283618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.283624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.284000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.284007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.284306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.284313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.284618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.284625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.284919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.284926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.285229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.285236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.285577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.285585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.285868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.285875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.286171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.286178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 
00:25:05.283 [2024-11-06 14:08:44.286477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.286485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.286830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.286837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.286990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.286998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.287359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.287366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.287653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.287660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.287961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.287968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.288299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.288306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.288687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.288694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.288999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.289006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.283 [2024-11-06 14:08:44.289199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.289206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 
00:25:05.283 [2024-11-06 14:08:44.289527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.283 [2024-11-06 14:08:44.289534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.283 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.289883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.289890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.290238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.290256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.290534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.290541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.290856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.290863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.291150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.291157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.291445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.291453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.291800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.291807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.292098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.292105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.292418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.292425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 
00:25:05.284 [2024-11-06 14:08:44.292742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.292749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.293047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.293055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.293371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.293379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.293690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.293697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.294037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.294044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.294348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.294355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.294653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.294660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.294989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.294996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.295288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.295295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.295656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.295663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 
00:25:05.284 [2024-11-06 14:08:44.295978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.295984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.296280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.296287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.296593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.296601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.296768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.296776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.297095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.297103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.297395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.297403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.297686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.297694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.298000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.298007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.298340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.298348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.298680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.298687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 
00:25:05.284 [2024-11-06 14:08:44.298971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.298978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.299167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.299174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.299464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.299471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.299645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.299652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.299862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.299870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.300149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.300156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.300476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.300484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.300769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.284 [2024-11-06 14:08:44.300777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.284 qpair failed and we were unable to recover it. 00:25:05.284 [2024-11-06 14:08:44.301075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.301082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.301348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.301356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 
00:25:05.285 [2024-11-06 14:08:44.301721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.301728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.301930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.301937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.302169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.302179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.302433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.302440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.302738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.302745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.303083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.303090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.303385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.303392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.303588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.303595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.303855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.303862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.304229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.304236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 
00:25:05.285 [2024-11-06 14:08:44.304546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.304553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.304863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.304870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.305152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.305159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.305447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.305454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.305753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.305760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.306048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.306055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.306378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.306386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.306716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.306724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.307031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.307038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.307328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.307335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 
00:25:05.285 [2024-11-06 14:08:44.307665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.307672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.307849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.307855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.308127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.308134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.308500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.308506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.308826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.308833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.309008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.309016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.309336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.309343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.309680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.309687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.309855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.309863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.310081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.310088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 
00:25:05.285 [2024-11-06 14:08:44.310358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.310366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.310668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.310675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.310967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.310974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.311269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.311276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.311558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.311565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.311939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.285 [2024-11-06 14:08:44.311946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.285 qpair failed and we were unable to recover it. 00:25:05.285 [2024-11-06 14:08:44.312238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.312248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.312594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.312602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.312892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.312899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.313208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.313215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 
00:25:05.286 [2024-11-06 14:08:44.313383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.313392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.313652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.313658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.313954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.313962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.314155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.314162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.314480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.314487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.314772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.314779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.315068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.315074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.315376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.315384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.315692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.315698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.315874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.315882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 
00:25:05.286 [2024-11-06 14:08:44.316222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.316229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.316575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.316582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.316753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.316761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.316943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.316950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.317247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.317255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.317591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.317599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.317919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.317927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.318235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.318242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.318535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.318542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.318861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.318868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 
00:25:05.286 [2024-11-06 14:08:44.319158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.319165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.319355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.319363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.319584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.319591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.319908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.319915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.320207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.320215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.320592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.320600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.320893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.320901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.321053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.321061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.321362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.321369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 00:25:05.286 [2024-11-06 14:08:44.321678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.286 [2024-11-06 14:08:44.321686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.286 qpair failed and we were unable to recover it. 
00:25:05.286 [2024-11-06 14:08:44.322080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.322088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.322249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.322256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.322551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.322558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.322795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.322802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.322981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.322989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.323172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.323180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.323517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.323525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.323826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.323833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.324131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.324138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.324432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.324439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 
00:25:05.287 [2024-11-06 14:08:44.324755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.324762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.325152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.325160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.325310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.325319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.325598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.325605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.325922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.325930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.326263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.326270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.326583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.326591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.326930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.326937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.327260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.327267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.327615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.327622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 
00:25:05.287 [2024-11-06 14:08:44.327974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.327981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.328277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.328285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.328466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.328474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.328756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.328764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.329127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.329134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.329477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.329485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.329877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.329885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.330071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.330078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.330470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.330478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.330797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.330805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 
00:25:05.287 [2024-11-06 14:08:44.331229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.331238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.331553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.331561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.331737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.331744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.332110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.332118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.332411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.332418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.332709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.332716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.333053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.333060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.333264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.287 [2024-11-06 14:08:44.333272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.287 qpair failed and we were unable to recover it. 00:25:05.287 [2024-11-06 14:08:44.333605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.333612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.333765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.333773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 
00:25:05.288 [2024-11-06 14:08:44.334112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.334120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.334405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.334413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.334731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.334738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.334888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.334895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.335078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.335086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.335386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.335394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.335722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.335729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.335938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.335947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.336271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.336279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.336591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.336599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 
00:25:05.288 [2024-11-06 14:08:44.336877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.336885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.337204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.337212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.337522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.337531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.337555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.288 [2024-11-06 14:08:44.337861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.337869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.338156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.338164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.338495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.338503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.338818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.338826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.338979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.338987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.339282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.339290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 
00:25:05.288 [2024-11-06 14:08:44.339659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.339667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.339862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.339869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.340238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.340251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.340568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.340576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.340870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.340877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.341055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.341062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.341358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.341390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.341716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.341724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.342075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.342083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.342131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.342138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 
00:25:05.288 [2024-11-06 14:08:44.342275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.342282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.342592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.342599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.342902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.342910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.343191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.343198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.343513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.343521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.343821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.343829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.343997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.288 [2024-11-06 14:08:44.344006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.288 qpair failed and we were unable to recover it. 00:25:05.288 [2024-11-06 14:08:44.344221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.344230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.344479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.344488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.344839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.344847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 
00:25:05.289 [2024-11-06 14:08:44.345150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.345158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.345462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.345470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.345762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.345769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.346107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.346114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.346409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.346417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.346757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.346764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.347130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.347138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.347395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.347404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.347490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.347498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.347799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.347807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 
00:25:05.289 [2024-11-06 14:08:44.348101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.348108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.348412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.348420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.348795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.348803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.349099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.349107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.349448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.349456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.349630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.349638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.349932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.349940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.350027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.350034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.350315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.350323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.350627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.350635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 
00:25:05.289 [2024-11-06 14:08:44.350947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.350955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.351254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.351262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.351550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.351558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.351867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.351875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.352212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.352219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.352419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.352427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.352728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.352737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.352898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.352905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.353188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.353196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.353397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.353405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 
00:25:05.289 [2024-11-06 14:08:44.353675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.353683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.353966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.353973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.354144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.354152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.354409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.354417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.354787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.354795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.289 [2024-11-06 14:08:44.354977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.289 [2024-11-06 14:08:44.354984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.289 qpair failed and we were unable to recover it. 00:25:05.290 [2024-11-06 14:08:44.355145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.290 [2024-11-06 14:08:44.355153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.290 qpair failed and we were unable to recover it. 00:25:05.290 [2024-11-06 14:08:44.355492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.290 [2024-11-06 14:08:44.355499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.290 qpair failed and we were unable to recover it. 00:25:05.290 [2024-11-06 14:08:44.355666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.290 [2024-11-06 14:08:44.355674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.290 qpair failed and we were unable to recover it. 00:25:05.290 [2024-11-06 14:08:44.355858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.290 [2024-11-06 14:08:44.355865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.290 qpair failed and we were unable to recover it. 
00:25:05.290 [2024-11-06 14:08:44.356247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.290 [2024-11-06 14:08:44.356255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.290 qpair failed and we were unable to recover it.
00:25:05.290 [... another 39 identical retries follow between 14:08:44.356438 and 14:08:44.366736, each failing with connect() errno = 111 against 10.0.0.2:4420 and ending in "qpair failed and we were unable to recover it." ...]
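Editor's note on the failure signature above: on Linux, errno 111 is ECONNREFUSED, i.e. the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 (the conventional NVMe/TCP port) at the moment of the attempt. The snippet below is a minimal, standalone sketch of how that condition surfaces from a plain POSIX connect(); it is not SPDK's posix_sock_create(), and the address and port are simply copied from the log for illustration.

/* Minimal standalone sketch (not SPDK code): a TCP connect() to a host
 * with no listener reports errno 111 (ECONNREFUSED), which is exactly
 * the "connect() failed, errno = 111" line seen in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints errno = 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}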
00:25:05.291 [... the retry at 14:08:44.367093 fails the same way, after which the target application prints its trace-setup notices: ...]
00:25:05.291 [2024-11-06 14:08:44.367201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:05.291 [2024-11-06 14:08:44.367225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:05.291 [2024-11-06 14:08:44.367231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:05.291 [2024-11-06 14:08:44.367237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:05.291 [2024-11-06 14:08:44.367242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:05.291 [... six more connect() errno = 111 / "qpair failed" retries between 14:08:44.367418 and 14:08:44.368224 ...]
00:25:05.291 [2024-11-06 14:08:44.368497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:25:05.291 [... the retry at 14:08:44.368702 fails the same way ...]
00:25:05.291 [2024-11-06 14:08:44.368635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:25:05.291 [2024-11-06 14:08:44.368756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:05.291 [2024-11-06 14:08:44.368758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:25:05.291 [... nine more connect() errno = 111 / "qpair failed" retries between 14:08:44.368982 and 14:08:44.371256 ...]
00:25:05.291 [... the identical three-line failure repeats 150 more times between 14:08:44.371597 and 14:08:44.413386 (elapsed-time prefix advancing from 00:25:05.291 to 00:25:05.295): connect() failed with errno = 111, sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:25:05.295 [2024-11-06 14:08:44.413546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.295 [2024-11-06 14:08:44.413554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-11-06 14:08:44.413774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.295 [2024-11-06 14:08:44.413781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-11-06 14:08:44.413967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.295 [2024-11-06 14:08:44.413973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-11-06 14:08:44.414265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.295 [2024-11-06 14:08:44.414272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-11-06 14:08:44.414655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.295 [2024-11-06 14:08:44.414661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-11-06 14:08:44.415075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.415081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.415277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.415284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.415490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.415497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.415832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.415840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.416172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.416179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 
00:25:05.296 [2024-11-06 14:08:44.416544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.416551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.416872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.416878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.417165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.417172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.417495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.417502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.417872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.417879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.418052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.418059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.418439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.418763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.418770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.419073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.419080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.419452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.419460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 
00:25:05.296 [2024-11-06 14:08:44.419777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.419784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.420178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.420185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.420490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.420497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.420824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.420831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.421123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.421129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.421522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.421529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.421827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.421834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.422037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.422043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.422259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.422266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.422400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.422406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 
00:25:05.296 [2024-11-06 14:08:44.422701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.422709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.422872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.422879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.423141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.423148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.423473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.423480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.423810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.423816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.424126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.424132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.424538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.424545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.424872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.424879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.424916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.424923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.425111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.425118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 
00:25:05.296 [2024-11-06 14:08:44.425434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.425441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.425584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.425590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.425805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.425813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-11-06 14:08:44.426129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.296 [2024-11-06 14:08:44.426136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.426413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.426420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.426740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.427053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.427060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.427247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.427254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.427623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.427630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.427800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.427807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 
00:25:05.297 [2024-11-06 14:08:44.428037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.428044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.428200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.428207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.428392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.428399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.428784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.428791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.429088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.429095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.429427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.429434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.429804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.429813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.429989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.429996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.430283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.430291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.430572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.430579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 
00:25:05.297 [2024-11-06 14:08:44.430890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.430897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.430976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.430983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.431289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.431296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.431479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.431485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.431799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.431806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.432130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.432137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.432495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.432502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.432712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.432719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.432793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.432800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.433086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.433093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 
00:25:05.297 [2024-11-06 14:08:44.433268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.433275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.433311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.433319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.433594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.433601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.433912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.433919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.434252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.434259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.434460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.434467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.434635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.434643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.434965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.434972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.435046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.435053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.435224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.435231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 
00:25:05.297 [2024-11-06 14:08:44.435441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.435448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.297 [2024-11-06 14:08:44.435484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.297 [2024-11-06 14:08:44.435490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.297 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.435813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.435821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.436099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.436107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.436445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.436452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.436658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.436665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.437010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.437017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.437337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.437344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.437517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.437524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.437838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.437844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 
00:25:05.298 [2024-11-06 14:08:44.438016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.438023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.438366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.438372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.438663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.438670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.438826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.438833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.439052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.439059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.439339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.439347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.439536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.439542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.439948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.439954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.440142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.440150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.440437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.440444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 
00:25:05.298 [2024-11-06 14:08:44.440812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.440819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.441107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.441114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.441407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.441415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.441733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.441740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.442155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.442163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.442474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.442481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.442628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.442635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.442790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.442796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.443127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.443134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.443469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.443476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 
00:25:05.298 [2024-11-06 14:08:44.443812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.443818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.444152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.444158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.444363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.444370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.444725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.444732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.445023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.445030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.445346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.445354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.445543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.445550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.445731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.445738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.445926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.445933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 00:25:05.298 [2024-11-06 14:08:44.446141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.298 [2024-11-06 14:08:44.446147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.298 qpair failed and we were unable to recover it. 
00:25:05.298 [2024-11-06 14:08:44.446304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.446311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.446693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.446700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.446871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.446878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.447097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.447105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.447398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.447405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.447587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.447594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.447983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.447990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.448275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.448282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.448627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.448634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 00:25:05.299 [2024-11-06 14:08:44.448944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.299 [2024-11-06 14:08:44.448951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.299 qpair failed and we were unable to recover it. 
00:25:05.299 [2024-11-06 14:08:44.449136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.299 [2024-11-06 14:08:44.449143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.299 qpair failed and we were unable to recover it.
00:25:05.304 [2024-11-06 14:08:44.449361 .. 2024-11-06 14:08:44.506727] previous 3 lines repeated for every reconnect attempt in this interval, all with the same tqpair=0x7fe318000b90, addr=10.0.0.2, port=4420, errno = 111
00:25:05.304 [2024-11-06 14:08:44.507020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.304 [2024-11-06 14:08:44.507027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.304 qpair failed and we were unable to recover it. 00:25:05.304 [2024-11-06 14:08:44.507351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.304 [2024-11-06 14:08:44.507359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.304 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.507681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.507688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.507966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.507973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.508301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.508309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.508626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.508633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.508937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.508944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.509292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.509301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.509460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.509467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.509656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.509664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 
00:25:05.305 [2024-11-06 14:08:44.509982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.509990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.510145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.510153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.510442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.510449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.510602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.510609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.510919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.510927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.511234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.511242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.511524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.511531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.511815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.511822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.512002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.512009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.512375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.512382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 
00:25:05.305 [2024-11-06 14:08:44.512557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.512564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.512924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.512931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.513083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.513090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.513242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.513253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.513440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.513447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.513483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.513489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.513883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.513892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.514194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.514202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.514571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.514578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.514900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.514907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 
00:25:05.305 [2024-11-06 14:08:44.515023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.515030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.515409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.515416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.515451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.515457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.515553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.515560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.515722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.515729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.516022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.516029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.516365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.516372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.516552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.516560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.516863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.305 [2024-11-06 14:08:44.516870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.305 qpair failed and we were unable to recover it. 00:25:05.305 [2024-11-06 14:08:44.517178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.517185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 
00:25:05.306 [2024-11-06 14:08:44.517366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.517374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.517678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.517685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.517875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.517882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.518192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.518199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.518431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.518439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.518818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.518825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.519125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.519132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.519502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.519509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.519675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.519682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.519926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.519933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 
00:25:05.306 [2024-11-06 14:08:44.520238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.520249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.520639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.520647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.520934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.520941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.521305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.521313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.521695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.521702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.521872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.521879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.522192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.522199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.522362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.522370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.522731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.522738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.522904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.522911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 
00:25:05.306 [2024-11-06 14:08:44.523129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.523136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.523308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.523315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.523614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.523621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.523981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.523990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.524298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.524306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.524622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.524629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.524944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.524951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.525316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.525323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.525668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.525676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.526028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.526036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 
00:25:05.306 [2024-11-06 14:08:44.526344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.526352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.526655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.526663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.526904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.526911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.527151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.527158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.527366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.527373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.527535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.527542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.527851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.527859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.306 qpair failed and we were unable to recover it. 00:25:05.306 [2024-11-06 14:08:44.528047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.306 [2024-11-06 14:08:44.528054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.528264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.528271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.528553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.528560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 
00:25:05.307 [2024-11-06 14:08:44.528975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.528983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.529165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.529172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.529470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.529478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.529790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.529798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.530123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.530131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.530468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.530476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.530835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.530842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.531172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.531179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.531521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.531528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.531831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.531839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 
00:25:05.307 [2024-11-06 14:08:44.532039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.532046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.532423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.532432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.532740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.532747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.532932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.532940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.533140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.533505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.533513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.533713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.533719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.534184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.534190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.534571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.534579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.534901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.534908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 
00:25:05.307 [2024-11-06 14:08:44.535213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.535220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.535592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.535599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.535800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.535808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.536101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.536110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.536435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.536620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.536628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.536794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.536802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.537139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.537147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.537321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.537329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.537488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.537496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 
00:25:05.307 [2024-11-06 14:08:44.537792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.537799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.538198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.538205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.538356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.538363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.538587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.538594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.538923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.538931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.307 [2024-11-06 14:08:44.539255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.307 [2024-11-06 14:08:44.539262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.307 qpair failed and we were unable to recover it. 00:25:05.308 [2024-11-06 14:08:44.539299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.539306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 00:25:05.308 [2024-11-06 14:08:44.539465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.539472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 00:25:05.308 [2024-11-06 14:08:44.539718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.539724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 00:25:05.308 [2024-11-06 14:08:44.540022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.540028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 
00:25:05.308 [2024-11-06 14:08:44.540385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.540392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 00:25:05.308 [2024-11-06 14:08:44.540775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.540782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 00:25:05.308 [2024-11-06 14:08:44.541160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.308 [2024-11-06 14:08:44.541167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.308 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.541581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.541589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.541904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.541911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.542254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.542261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.542568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.542575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.542887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.542894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.542932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.542939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 00:25:05.584 [2024-11-06 14:08:44.543208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.543215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it. 
00:25:05.584 [2024-11-06 14:08:44.543363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.584 [2024-11-06 14:08:44.543370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.584 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failed triplet repeats verbatim for every reconnect attempt from 14:08:44.543 through 14:08:44.602, differing only in timestamps; duplicate entries omitted ...]
00:25:05.590 [2024-11-06 14:08:44.602486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-11-06 14:08:44.602493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it.
00:25:05.590 [2024-11-06 14:08:44.602661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-11-06 14:08:44.602667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-11-06 14:08:44.602987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-11-06 14:08:44.602994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-11-06 14:08:44.603164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.590 [2024-11-06 14:08:44.603172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.590 qpair failed and we were unable to recover it. 00:25:05.590 [2024-11-06 14:08:44.603345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.603353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.603666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.603673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.604064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.604071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.604368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.604375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.604663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.604670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.604799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.604806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.605020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.605028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 
00:25:05.591 [2024-11-06 14:08:44.605307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.605315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.605601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.605607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.605823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.605829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.606129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.606137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.606503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.606510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.606817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.606824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.607136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.607143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.607317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.607324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.607557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.607564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.607898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.607906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 
00:25:05.591 [2024-11-06 14:08:44.608250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.608257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.608546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.608553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.608880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.608887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.609047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.609054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.609249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.609257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.609620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.609627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.609911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.609918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.610227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.610235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.610532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.610539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.610848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.610855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 
00:25:05.591 [2024-11-06 14:08:44.611028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.611036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.611414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.611422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.611590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.611597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.611900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.611908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.612208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.612215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.612426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-11-06 14:08:44.612433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.591 qpair failed and we were unable to recover it. 00:25:05.591 [2024-11-06 14:08:44.612822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.612829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.613151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.613157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.613547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.613555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.613804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.613811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 
00:25:05.592 [2024-11-06 14:08:44.614126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.614134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.614318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.614326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.614640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.614647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.614826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.614834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.614992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.614999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.615307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.615314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.615506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.615513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.615864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.615870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.616181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.616188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.616384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.616392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 
00:25:05.592 [2024-11-06 14:08:44.616850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.616857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.617139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.617146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.617319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.617326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.617663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.617670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.618025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.618031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.618372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.618379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.618554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.618561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.618849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.618856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.619144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.619151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.619463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.619470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 
00:25:05.592 [2024-11-06 14:08:44.619811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.619818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.620133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.620141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.620386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.620395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.620707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.620715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.620878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.620885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.621045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.621052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.621253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.621260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.621579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.621586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.621750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.621757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.622138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.622145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 
00:25:05.592 [2024-11-06 14:08:44.622308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.622318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.622558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.622566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.622752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.622760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.622799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.592 [2024-11-06 14:08:44.622806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.592 qpair failed and we were unable to recover it. 00:25:05.592 [2024-11-06 14:08:44.623096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.623102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.623482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.623490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.623833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.623840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.624172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.624179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.624512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.624519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.624849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.624856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 
00:25:05.593 [2024-11-06 14:08:44.625153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.625159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.625491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.625498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.625683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.625691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.625893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.625901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.626304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.626312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.626500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.626507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.626856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.626862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.627183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.627191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.627225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.627231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.627416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.627423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 
00:25:05.593 [2024-11-06 14:08:44.627612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.627619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.627790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.627797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.628071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.628078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.628236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.628251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.628433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.628440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.628633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.628642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.628979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.628987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.629177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.629183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.629469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.629476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.629525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.629533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 
00:25:05.593 [2024-11-06 14:08:44.629707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.629714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.630052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.630059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.630346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.630354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.630533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.630540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.630689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.630696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.631021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.631028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.631316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.631323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.631658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.631664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.631851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.631859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.632199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.632207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 
00:25:05.593 [2024-11-06 14:08:44.632505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.632514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.632718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.593 [2024-11-06 14:08:44.632725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.593 qpair failed and we were unable to recover it. 00:25:05.593 [2024-11-06 14:08:44.633076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.633083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.633378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.633384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.633693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.633701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.634035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.634042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.634234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.634242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.634424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.634431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.634679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.634686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.635000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.635007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 
00:25:05.594 [2024-11-06 14:08:44.635177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.635184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.635411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.635419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.635735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.635743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.636010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.636018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.636334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.636341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.636668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.636675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.636956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.636963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.637180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.637186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.637366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.637374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 00:25:05.594 [2024-11-06 14:08:44.637655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.637662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it. 
00:25:05.594 [2024-11-06 14:08:44.637848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.594 [2024-11-06 14:08:44.637856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.594 qpair failed and we were unable to recover it.
00:25:05.600 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:08:44.638218 through 14:08:44.696314 ...]
00:25:05.600 [2024-11-06 14:08:44.696346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.696352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.696682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.696689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.696975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.696982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.697284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.697291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.697482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.697490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.697825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.697832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.698032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.698039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.698373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.698381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.698658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.698665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 00:25:05.600 [2024-11-06 14:08:44.698994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.600 [2024-11-06 14:08:44.699001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.600 qpair failed and we were unable to recover it. 
00:25:05.600 [2024-11-06 14:08:44.699284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.699293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.699603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.699610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.699704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.699710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.699884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.699891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.700165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.700172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.700487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.700495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.700817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.700823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.701052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.701059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.701324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.701331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.701718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.701725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 
00:25:05.601 [2024-11-06 14:08:44.702048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.702056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.702376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.702383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.702670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.702677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.702848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.702855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.703202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.703209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.703401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.703409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.703585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.703592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.703900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.703907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.704242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.704253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.704442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.704449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 
00:25:05.601 [2024-11-06 14:08:44.704830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.704837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.705161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.705168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.705509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.705516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.705843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.705851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.706043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.706050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.706472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.706480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.706823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.706830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.706999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.707006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.707326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.707334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.707629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.707637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 
00:25:05.601 [2024-11-06 14:08:44.707946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.707953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.708118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.708125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.708437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.708444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.708797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.708805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.708964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.708971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.601 [2024-11-06 14:08:44.709199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.601 [2024-11-06 14:08:44.709207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.601 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.709389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.709396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.709748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.709755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.710076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.710083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.710432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.710440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-11-06 14:08:44.710626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.710636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.710955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.710961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.711122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.711129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.711362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.711369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.711520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.711527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.711897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.711904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.712223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.712231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.712632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.712639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.712940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.712947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.713301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.713308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-11-06 14:08:44.713635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.713642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.713989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.713997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.714352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.714360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.714519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.714526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.714861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.714869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.715030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.715038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.715369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.715377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.715727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.715734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.715951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.715958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.716087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.716094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-11-06 14:08:44.716269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.716276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.716588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.716595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.716916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.716924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.717229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.717236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.717414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.717421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.717798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.717805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.718107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.718113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.718416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.718424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.718711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.718718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.719014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.719020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 
00:25:05.602 [2024-11-06 14:08:44.719337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.719345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.719656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.719664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.719983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.602 [2024-11-06 14:08:44.719991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.602 qpair failed and we were unable to recover it. 00:25:05.602 [2024-11-06 14:08:44.720290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.720298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.720585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.720591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.720951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.720958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.721279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.721287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.721596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.721603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.721924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.721931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.722232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.722239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-11-06 14:08:44.722529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.722536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.722692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.722699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.723002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.723009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.723211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.723218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.723415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.723422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.723707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.723713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.723980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.723986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.724151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.724158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.724455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.724463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.724770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.724777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-11-06 14:08:44.725081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.725088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.725402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.725410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.725699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.725706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.725884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.725891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.726084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.726092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.726407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.726414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.726759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.726766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.727095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.727103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.727443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.727451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.727752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.727759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 
00:25:05.603 [2024-11-06 14:08:44.728092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.728098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.728385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.728392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.728671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.728679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.729021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.729028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.729192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.729199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.729350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.729357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.729547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.729555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.603 [2024-11-06 14:08:44.729923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.603 [2024-11-06 14:08:44.729932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.603 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.730074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.730081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.730370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.730377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 
00:25:05.604 [2024-11-06 14:08:44.730719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.730726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.731034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.731041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.731367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.731374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.731661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.731669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.731734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.731741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.732034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.732042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.732335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.732343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.732654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.732661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.732828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.732835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 00:25:05.604 [2024-11-06 14:08:44.733018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.604 [2024-11-06 14:08:44.733025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.604 qpair failed and we were unable to recover it. 
00:25:05.604 [2024-11-06 14:08:44.733249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.604 [2024-11-06 14:08:44.733257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.604 qpair failed and we were unable to recover it.
00:25:05.604 [... the same triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for every retry, with only the timestamps advancing, from 14:08:44.733249 through 14:08:44.789866 ...]
00:25:05.610 [2024-11-06 14:08:44.789859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.610 [2024-11-06 14:08:44.789866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.610 qpair failed and we were unable to recover it.
00:25:05.610 [2024-11-06 14:08:44.790039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.790046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.790303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.790310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.790608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.790615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.790916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.790923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.791270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.791277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.791577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.791585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.791859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.791866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.792066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.792072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.792388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.792395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.792433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.792439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 
00:25:05.610 [2024-11-06 14:08:44.792719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.792726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.793078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.793085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.793301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.793308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.793634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.793641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.793926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.793932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.610 qpair failed and we were unable to recover it. 00:25:05.610 [2024-11-06 14:08:44.794084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.610 [2024-11-06 14:08:44.794091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.794277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.794284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.794604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.794612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.794921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.794928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.795340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.795347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 
00:25:05.611 [2024-11-06 14:08:44.795568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.795575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.795613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.795620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.795795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.795802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.795951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.795958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.796288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.796295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.796609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.796615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.796807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.796814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.797164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.797171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.797458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.797466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.797635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.797643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 
00:25:05.611 [2024-11-06 14:08:44.798025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.798032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.798324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.798332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.798532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.798539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.798893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.798900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.799200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.799207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.799405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.799412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.799597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.799604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.799812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.799818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.800138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.800144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.800434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.800441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 
00:25:05.611 [2024-11-06 14:08:44.800761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.800768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.800953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.800959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.801186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.801194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.801480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.801488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.801825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.801832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.802044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.802051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.802352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.802359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.802651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.802657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.802850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.802858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.611 [2024-11-06 14:08:44.803155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.803162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 
00:25:05.611 [2024-11-06 14:08:44.803468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.611 [2024-11-06 14:08:44.803475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.611 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.803883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.803890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.804064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.804071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.804422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.804429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.804714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.804722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.805020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.805027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.805186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.805193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.805569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.805578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.805860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.805867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.806016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.806023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 
00:25:05.612 [2024-11-06 14:08:44.806308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.806315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.806630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.806637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.807043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.807050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.807401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.807409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.807785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.807793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.808080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.808087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.808410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.808418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.808703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.808710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.809018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.809025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.809423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.809430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 
00:25:05.612 [2024-11-06 14:08:44.809599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.809607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.809869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.809876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.810205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.810212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.810493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.810500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.810849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.810856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.811173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.811180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.811337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.811344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.811651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.811659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.811999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.812006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.812170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.812177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 
00:25:05.612 [2024-11-06 14:08:44.812218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.812226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.812513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.812520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.812843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.812850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.813024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.813030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.813341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.813349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.813662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.813670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.813958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.813965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.814263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.814270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.612 qpair failed and we were unable to recover it. 00:25:05.612 [2024-11-06 14:08:44.814585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.612 [2024-11-06 14:08:44.814592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.814918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.814925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 
00:25:05.613 [2024-11-06 14:08:44.815113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.815121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.815261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.815268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.815576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.815583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.815768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.815776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.816095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.816101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.816269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.816276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.816647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.816655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.816864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.816873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.817161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.817168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.817381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.817388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 
00:25:05.613 [2024-11-06 14:08:44.817772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.817778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.817953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.817960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.818310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.818317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.818611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.818617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.818930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.818937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.819331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.819338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.819510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.819520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.819844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.819851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.820034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.820042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.820203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.820210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 
00:25:05.613 [2024-11-06 14:08:44.820404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.820411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.820706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.820713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.820924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.820931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.820968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.820974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.821264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.821271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.821595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.821603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.821924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.821930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.822229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.822236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.613 [2024-11-06 14:08:44.822404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.613 [2024-11-06 14:08:44.822411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.613 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.822634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.822641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 
00:25:05.614 [2024-11-06 14:08:44.822960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.822968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.823129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.823135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.823528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.823535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.823672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.823678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.824062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.824070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.824242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.824253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.824426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.824433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.824647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.824655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.824968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.824975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 00:25:05.614 [2024-11-06 14:08:44.825138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.825145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it. 
00:25:05.614 [2024-11-06 14:08:44.825457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.614 [2024-11-06 14:08:44.825464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.614 qpair failed and we were unable to recover it.
[... the identical three-message failure (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats 206 times between 14:08:44.825457 and 14:08:44.882211; the final 4 attempts, 14:08:44.882551 through 14:08:44.884229, report the same errors for tqpair=0x7fe314000b90 ...]
00:25:05.895 [2024-11-06 14:08:44.884191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.884229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe314000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it.
00:25:05.895 [2024-11-06 14:08:44.884307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.884315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.884670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.884677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.884713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.884719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.884993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.885001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.885306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.885314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.885466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.885473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.885754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.885761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.886091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.886098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.886437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.886445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.886793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.886800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 
00:25:05.895 [2024-11-06 14:08:44.887074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.887081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.887409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.887417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.887811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.887818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.888130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.888137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.888330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.888338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.888530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.888536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.888913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.888920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.889107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.889114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.895 qpair failed and we were unable to recover it. 00:25:05.895 [2024-11-06 14:08:44.889452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.895 [2024-11-06 14:08:44.889460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.889770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.889778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 
00:25:05.896 [2024-11-06 14:08:44.890082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.890090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.890467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.890474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.890768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.890775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.890946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.890953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.891218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.891227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.891527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.891535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.891750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.891757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.892077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.892084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.892405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.892413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.892598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.892606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 
00:25:05.896 [2024-11-06 14:08:44.892935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.892942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.893288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.893295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.893447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.893454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.893838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.893893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.894117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.894137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.894473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.894493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.894721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.894738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.894924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.894940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.895173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.895190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.895406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.895423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 
00:25:05.896 [2024-11-06 14:08:44.895484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.895500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.895701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.895717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.895908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.895924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.896287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.896304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.896626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.896642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.896875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.896892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.897093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.897109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.897472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.897489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.897681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.897697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.897891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.897907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 
00:25:05.896 [2024-11-06 14:08:44.898118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.898134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.898470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.896 [2024-11-06 14:08:44.898492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.896 qpair failed and we were unable to recover it. 00:25:05.896 [2024-11-06 14:08:44.898845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.898860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.899197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.899213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.899564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.899581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.899903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.899920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.900173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.900189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.900509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.900520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.900829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.900839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.901131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.901141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 
00:25:05.897 [2024-11-06 14:08:44.901298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.901308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.901594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.901604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.901777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.901787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.902089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.902098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.902269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.902279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.902562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.902573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.902920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.902930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.903219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.903229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.903579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.903589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.903872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.903882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 
00:25:05.897 [2024-11-06 14:08:44.904196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.904207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.904534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.904544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.904757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.904767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.905082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.905093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.905268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.905279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.905607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.905617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.905945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.905956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.906133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.906144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.906309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.906321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.906657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.906667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 
00:25:05.897 [2024-11-06 14:08:44.906973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.906983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.907283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.907294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.907631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.907642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.907929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.907939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.908236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.908252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.908604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.908614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.908655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.908664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.908852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.908862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.897 qpair failed and we were unable to recover it. 00:25:05.897 [2024-11-06 14:08:44.909162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.897 [2024-11-06 14:08:44.909172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.909507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.909517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 
00:25:05.898 [2024-11-06 14:08:44.909874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.909884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.910231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.910241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.910595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.910606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.910809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.910819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.911115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.911125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.911473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.911483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.911785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.911795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.911947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.911959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.912306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.912317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.912512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.912522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 
00:25:05.898 [2024-11-06 14:08:44.912900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.912910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.913202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.913212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.913387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.913398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.913734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.913745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.914063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.914073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.914283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.914296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.914631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.914641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.914814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.914824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.915195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.915205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.915558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.915570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 
00:25:05.898 [2024-11-06 14:08:44.915743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.915755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.916063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.916074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.916378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.916389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.916587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.916597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.916948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.916959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.917330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.917340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.917611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.917621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.917802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.917812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.918152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.918162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.918472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.918483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 
00:25:05.898 [2024-11-06 14:08:44.918776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.918786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.918984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.918994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.919377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.919388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.919697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.919707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.920042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.920051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.920387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.920397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.920708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.898 [2024-11-06 14:08:44.920718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.898 qpair failed and we were unable to recover it. 00:25:05.898 [2024-11-06 14:08:44.921008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.899 [2024-11-06 14:08:44.921018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.899 qpair failed and we were unable to recover it. 00:25:05.899 [2024-11-06 14:08:44.921314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.899 [2024-11-06 14:08:44.921324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.899 qpair failed and we were unable to recover it. 00:25:05.899 [2024-11-06 14:08:44.921654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.899 [2024-11-06 14:08:44.921664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.899 qpair failed and we were unable to recover it. 
00:25:05.899 [2024-11-06 14:08:44.921962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.899 [2024-11-06 14:08:44.921972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.899 qpair failed and we were unable to recover it.
[... the three messages above repeat with only the timestamps advancing, from 14:08:44.922297 through 14:08:44.981558 (about 200 further attempts for the same tqpair=0x226b490); every connect() to 10.0.0.2, port=4420 fails identically with errno = 111 ...]
00:25:05.904 [2024-11-06 14:08:44.981866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.904 [2024-11-06 14:08:44.981877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.904 qpair failed and we were unable to recover it. 00:25:05.904 [2024-11-06 14:08:44.982075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.904 [2024-11-06 14:08:44.982085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.904 qpair failed and we were unable to recover it. 00:25:05.904 [2024-11-06 14:08:44.982501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.904 [2024-11-06 14:08:44.982512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.904 qpair failed and we were unable to recover it. 00:25:05.904 [2024-11-06 14:08:44.982817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.904 [2024-11-06 14:08:44.982827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.904 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.983011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.983024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.983310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.983322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.983688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.983699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.984095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.984106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.984476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.984486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.984793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.984803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 
00:25:05.905 [2024-11-06 14:08:44.985000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.985010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.985380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.985392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.985590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.985600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.985963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.985973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.986280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.986291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.986605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.986616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.986940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.986951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.987275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.987286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.987602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.987612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.987777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.987787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 
00:25:05.905 [2024-11-06 14:08:44.988159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.988170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.988348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.988359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.988552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.988562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.988851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.988862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.989186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.989196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.989492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.989503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.989811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.989822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.990155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.990165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.990331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.990342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.990640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.990651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 
00:25:05.905 [2024-11-06 14:08:44.991026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.991036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.991337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.991347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.991656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.991666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.991991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.992002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.992326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.992338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.992622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.992632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.993011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.993022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.993410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.993422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.993734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.993744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.994048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.994059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 
00:25:05.905 [2024-11-06 14:08:44.994358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.994370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.994557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.994567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.905 [2024-11-06 14:08:44.994754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.905 [2024-11-06 14:08:44.994764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.905 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.995103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.995113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.995303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.995313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.995657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.995667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.995977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.995987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.996276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.996288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.996468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.996478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.996846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.996857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 
00:25:05.906 [2024-11-06 14:08:44.997134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.997146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.997500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.997511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.997674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.997684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.998035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.998046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.998431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.998442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.998631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.998641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.998848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.998858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.999180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.999190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.999517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.999528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:44.999833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:44.999844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 
00:25:05.906 [2024-11-06 14:08:45.000027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.000037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.000376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.000389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.000736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.000748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.001144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.001154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.001319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.001331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.001666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.001677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.001880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.001890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.002215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.002227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.002583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.002594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.002905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.002916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 
00:25:05.906 [2024-11-06 14:08:45.003201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.003211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.003507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.003518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.003822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.003833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.004144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.004154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.004476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.004487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.906 [2024-11-06 14:08:45.004787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.906 [2024-11-06 14:08:45.004798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.906 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.004968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.004980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.005277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.005293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.005495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.005506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.005832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.005842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 
00:25:05.907 [2024-11-06 14:08:45.006146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.006156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.006363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.006374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.006573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.006583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.006821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.006831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.007147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.007159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.007466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.007477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.007660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.007670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.007986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.007998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.008180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.008192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.008628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.008638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 
00:25:05.907 [2024-11-06 14:08:45.008948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.008958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.009276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.009287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.009592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.009603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.009894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.009904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.010085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.010094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.010444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.010455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.010786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.010796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.010837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.010846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.011021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.011031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.011344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.011354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 
00:25:05.907 [2024-11-06 14:08:45.011647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.011657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.011956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.011966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.012140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.012155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.012350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.012360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.012648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.012659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.012972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.012982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.013160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.013171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.013498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.013509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.013715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.013725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.014067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.014077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 
00:25:05.907 [2024-11-06 14:08:45.014265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.014275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.014525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.014535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.014851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.014861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.015060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.015069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.907 qpair failed and we were unable to recover it. 00:25:05.907 [2024-11-06 14:08:45.015236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.907 [2024-11-06 14:08:45.015250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.015617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.015627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.015975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.015985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.016151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.016162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.016508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.016518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.016822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.016832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 
00:25:05.908 [2024-11-06 14:08:45.017152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.017161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.017554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.017564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.017890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.017900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.018205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.018215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.018524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.018534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.018580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.018588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.018928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.018938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.019116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.019126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.019482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.019493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 00:25:05.908 [2024-11-06 14:08:45.019780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.908 [2024-11-06 14:08:45.019790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.908 qpair failed and we were unable to recover it. 
00:25:05.908 [2024-11-06 14:08:45.020154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.908 [2024-11-06 14:08:45.020164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.908 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed triple repeats for every reconnect attempt from 14:08:45.020488 through 14:08:45.036496 ...]
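For readers triaging this failure: errno = 111 is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 when nvme_tcp_qpair_connect_sock retried. A minimal shell probe (illustrative only, not part of the test; it assumes bash's /dev/tcp pseudo-device is available) reproduces the same condition:

    # Probe the NVMe/TCP listener the log is trying to reach; a refused
    # connect here corresponds to the errno = 111 records above.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "connect() to 10.0.0.2:4420 failed (ECONNREFUSED if no listener)"
    fi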
[... the same triple repeats from 14:08:45.036846 through 14:08:45.038735 ...]
00:25:05.910 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:05.910 [2024-11-06 14:08:45.039030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.910 [2024-11-06 14:08:45.039041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.910 qpair failed and we were unable to recover it.
00:25:05.910 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:25:05.910 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:05.910 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:05.910 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure triples continue from 14:08:45.039370 through 14:08:45.041276 ...]
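The (( i == 0 )) / return 0 trace above is the harness's readiness check completing. A hypothetical sketch of such a poll loop (the function name and 30-attempt budget are assumptions, not the harness's actual code; host and port are taken from the log):

    # Retry a TCP connect until the target listens or attempts run out.
    wait_for_listener() {
        local host=$1 port=$2 i
        for ((i = 0; i < 30; i++)); do
            # Connect succeeds only once something accepts on host:port.
            if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
                return 0
            fi
            sleep 1
        done
        return 1
    }
    wait_for_listener 10.0.0.2 4420 || echo "target never came up on 10.0.0.2:4420"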
[... connect()/qpair-failure triples continue from 14:08:45.041607 through 14:08:45.063161, every attempt against tqpair=0x226b490 at 10.0.0.2:4420 ending in errno = 111 ...]
[... connect()/qpair-failure triples continue from 14:08:45.063361 through 14:08:45.064009 ...]
00:25:05.912 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:05.912 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:05.912 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:05.912 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure triples continue from 14:08:45.064200 through 14:08:45.065423 ...]
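Two harness steps are interleaved with the failures above: a cleanup trap and the creation of the test bdev. Outside the rpc_cmd wrapper, the same bdev call can be issued directly with SPDK's scripts/rpc.py; a sketch (the checkout-relative path is an assumption, and process_shm/nvmftestfini are harness functions shown in the trace):

    # Run shared-memory diagnostics and test teardown on any exit path,
    # exactly as the trap registered in the trace above.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

    # Create a 64 MiB malloc bdev with 512-byte blocks named Malloc0,
    # matching the rpc_cmd bdev_malloc_create 64 512 -b Malloc0 call.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0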
[... connect()/qpair-failure triples continue from 14:08:45.065773 through 14:08:45.076913; every attempt to reach tqpair=0x226b490 at 10.0.0.2:4420 fails with errno = 111 and the qpair is not recovered ...]
00:25:05.913 [2024-11-06 14:08:45.077092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.913 [2024-11-06 14:08:45.077102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.913 qpair failed and we were unable to recover it. 00:25:05.913 [2024-11-06 14:08:45.077423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.913 [2024-11-06 14:08:45.077434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.913 qpair failed and we were unable to recover it. 00:25:05.913 [2024-11-06 14:08:45.077807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.913 [2024-11-06 14:08:45.077817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.913 qpair failed and we were unable to recover it. 00:25:05.913 [2024-11-06 14:08:45.078117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.078126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.078431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.078442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.078738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.078749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.078903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.078913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.079099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.079109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.079441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.079451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.079788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.079798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 
00:25:05.914 [2024-11-06 14:08:45.080125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.080135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.080309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.080321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.080549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.080560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.080906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.080916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.081087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.081097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.081412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.081423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.081768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.081778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.081978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.081988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.082168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.082178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.082369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.082380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 
00:25:05.914 [2024-11-06 14:08:45.082725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.082735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.083056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.083066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.083450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.083460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.083785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.083795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.084116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.084126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.084309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.084320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.084651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.084662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.084857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.084868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.085064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.085074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.085397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.085407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 
00:25:05.914 [2024-11-06 14:08:45.085601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.085611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.085807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.085817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.086159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.086170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.086466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.086477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.086770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.086780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.087093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.087104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.087460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.087470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.087653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.087663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.087844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.087854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.088181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.088191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 
00:25:05.914 [2024-11-06 14:08:45.088535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.088545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.088932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.914 [2024-11-06 14:08:45.088941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.914 qpair failed and we were unable to recover it. 00:25:05.914 [2024-11-06 14:08:45.089148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.089158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.089467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.089478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.089816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.089827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.090109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.090119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.090417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.090428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.090601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.090611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.090890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.090899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.091213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 
00:25:05.915 [2024-11-06 14:08:45.091593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.091606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.091798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.091808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.092131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.092142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.092446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.092456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.092784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.092793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.093126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.093136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.093303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.093313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.093653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.093663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.093817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.093826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 00:25:05.915 [2024-11-06 14:08:45.094010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.915 [2024-11-06 14:08:45.094020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420 00:25:05.915 qpair failed and we were unable to recover it. 
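For anyone triaging this failure mode: errno = 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections at 10.0.0.2:4420 yet while the host-side NVMe/TCP initiator kept retrying; the target listener is only added later in this log (rpc_cmd nvmf_subsystem_add_listener). A quick way to confirm the errno mapping from a shell (assumes python3 on a Linux box; not a command this test run executed):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # prints: ECONNREFUSED - Connection refused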
[... retry triples for tqpair=0x226b490 (addr=10.0.0.2, port=4420) continue, 14:08:45.094316 through 14:08:45.096230 ...]
00:25:05.915 Malloc0
[... retry triples continue, 14:08:45.096630 through 14:08:45.096829 ...]
00:25:05.915 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[... retry triples continue, 14:08:45.097156 through 14:08:45.097483 ...]
00:25:05.915 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:05.915 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:05.915 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triples continue (interleaved with the trace lines above), 14:08:45.097891 through 14:08:45.099340 ...]
[... retry triples continue, 14:08:45.099507 through 14:08:45.103436 ...]
00:25:05.916 [2024-11-06 14:08:45.103694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... retry triples continue, 14:08:45.103733 through 14:08:45.105152 ...]
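The *** TCP Transport Init *** notice is the target-side acknowledgement of the rpc_cmd nvmf_create_transport -t tcp call traced above. A hedged way to double-check the transport state from outside the harness (assumes a checked-out SPDK tree and the default RPC socket; not a command this log actually ran):

  scripts/rpc.py nvmf_get_transports   # should now list a "tcp" transport with its queue and IO-unit sizes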
[... retry triples continue, 14:08:45.105454 through 14:08:45.107872 ...]
[... retry triples continue, 14:08:45.108210 through 14:08:45.108768 ...]
00:25:05.916 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triples continue (interleaved with the trace lines above), 14:08:45.109004 through 14:08:45.110205 ...]
[... retry triples continue, 14:08:45.110371 through 14:08:45.116179 ...]
[... retry triple at 14:08:45.116517 ...]
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:05.917 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... retry triples continue (interleaved with the trace lines above), 14:08:45.116862 through 14:08:45.118852 ...]
00:25:05.917 [2024-11-06 14:08:45.118894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.917 [2024-11-06 14:08:45.118904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.917 qpair failed and we were unable to recover it.
00:25:05.917 [2024-11-06 14:08:45.119236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.917 [2024-11-06 14:08:45.119248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.917 qpair failed and we were unable to recover it.
00:25:05.917 [2024-11-06 14:08:45.119418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.917 [2024-11-06 14:08:45.119428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.917 qpair failed and we were unable to recover it.
00:25:05.917 [2024-11-06 14:08:45.119666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.119676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.120018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.120027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.120307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.120317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.120646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.120656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.120824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.120833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.121220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.121322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.121758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.121848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.122563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.122651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe318000b90 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.122980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.122990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.123290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.123300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.123643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.123653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.123956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.123966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.124290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.124300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.124444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.124453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.124645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.124655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:05.918 [2024-11-06 14:08:45.124997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.125008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.918 [2024-11-06 14:08:45.125327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.125338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.125621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.125637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.125997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.126007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.126422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.126432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.126613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.126623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.126959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.126969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.127298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.127308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.127512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.127522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.127827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.127836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.128130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.128140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.128447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.918 [2024-11-06 14:08:45.128458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226b490 with addr=10.0.0.2, port=4420
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.128539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.918 [2024-11-06 14:08:45.134336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.918 [2024-11-06 14:08:45.134398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.918 [2024-11-06 14:08:45.134416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.918 [2024-11-06 14:08:45.134427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.918 [2024-11-06 14:08:45.134434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:05.918 [2024-11-06 14:08:45.134454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.918 qpair failed and we were unable to recover it.
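At this point the target side is configured again: subsystem nqn.2016-06.io.spdk:cnode1 has namespace Malloc0 (target_disconnect.sh@24), a TCP listener on 10.0.0.2 port 4420 (@25, confirmed by the nvmf_tcp_listen NOTICE), and a discovery listener on the same address (@26). The rpc_cmd calls are the autotest wrapper around SPDK's scripts/rpc.py; a standalone sketch of the same sequence, assuming a running nvmf_tgt with its default RPC socket (the transport-create step and the Malloc0 sizes are assumptions, they happen earlier in the log):

  # Hedged sketch, not the harness itself: equivalent rpc.py calls.
  ./scripts/rpc.py nvmf_create_transport -t tcp                        # assumed earlier step
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                # sizes assumed: 64 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a # -a (allow any host) assumed
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420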
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:05.918 14:08:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1045776
00:25:05.918 [2024-11-06 14:08:45.144241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.918 [2024-11-06 14:08:45.144294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.918 [2024-11-06 14:08:45.144309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.918 [2024-11-06 14:08:45.144317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.918 [2024-11-06 14:08:45.144323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:05.918 [2024-11-06 14:08:45.144338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.918 qpair failed and we were unable to recover it.
00:25:05.918 [2024-11-06 14:08:45.154322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.918 [2024-11-06 14:08:45.154369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.918 [2024-11-06 14:08:45.154383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.919 [2024-11-06 14:08:45.154390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.919 [2024-11-06 14:08:45.154396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:05.919 [2024-11-06 14:08:45.154410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.919 qpair failed and we were unable to recover it.
00:25:06.180 [2024-11-06 14:08:45.164293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.180 [2024-11-06 14:08:45.164354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.180 [2024-11-06 14:08:45.164368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.180 [2024-11-06 14:08:45.164375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.180 [2024-11-06 14:08:45.164381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.180 [2024-11-06 14:08:45.164395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.180 qpair failed and we were unable to recover it.
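Every retry block from here on has the same shape: the target rejects the I/O-queue CONNECT ("Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair, i.e. the controller ID carried in the CONNECT data no longer matches anything on the restarted target), and the host sees the command complete with sct 1, sc 130. Decoding that status (sct 1 is the command-specific status type; 130 decimal shown in hex):

  # Not from the test scripts: print the status code from the records above in hex.
  printf 'sct=1 (command specific), sc=0x%02x\n' 130   # -> sc=0x82
  # In the NVMe-oF fabrics CONNECT status space, 0x82 is "connect invalid
  # parameters" (SPDK names it SPDK_NVMF_FABRIC_SC_INVALID_PARAM), which is
  # consistent with the target-side "Unknown controller ID" complaint.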
00:25:06.180 [2024-11-06 14:08:45.174270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.180 [2024-11-06 14:08:45.174336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.180 [2024-11-06 14:08:45.174350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.180 [2024-11-06 14:08:45.174357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.180 [2024-11-06 14:08:45.174366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.180 [2024-11-06 14:08:45.174381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.180 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.184252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.184299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.184312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.184319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.184326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.184340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.194336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.194386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.194399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.194406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.194413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.194427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.204213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.204265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.204281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.204288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.204295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.204310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.214396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.214442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.214457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.214464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.214470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.214484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.224415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.224477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.224491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.224498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.224504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.224518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.234482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.234529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.234542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.234550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.234556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.234569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.244361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.244409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.244423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.244430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.244437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.244451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.254351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.254400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.254413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.254420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.254426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.254440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.264486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.264528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.264546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.264553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.264559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.264573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.274535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.274585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.274598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.274605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.274611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.274624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.284546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.284593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.284606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.181 [2024-11-06 14:08:45.284613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.181 [2024-11-06 14:08:45.284619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.181 [2024-11-06 14:08:45.284633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.181 qpair failed and we were unable to recover it.
00:25:06.181 [2024-11-06 14:08:45.294453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.181 [2024-11-06 14:08:45.294517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.181 [2024-11-06 14:08:45.294532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.294539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.294545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.294559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.304510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.304555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.304568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.304575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.304585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.304599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.314667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.314708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.314721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.314728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.314734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.314748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.324651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.324694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.324707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.324714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.324721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.324734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.334696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.334745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.334758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.334765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.334772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.334785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.344675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.344715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.344728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.344735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.344741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.344755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.354761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.354844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.354857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.354864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.354870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.354884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.364754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.364800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.364814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.364821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.364828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.364841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.374941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.374994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.375007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.375014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.375021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.375034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.384843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.384887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.384901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.384908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.384914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.384928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.394931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.394978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.394995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.395002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.395008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.395022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.404875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.182 [2024-11-06 14:08:45.404921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.182 [2024-11-06 14:08:45.404937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.182 [2024-11-06 14:08:45.404944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.182 [2024-11-06 14:08:45.404951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.182 [2024-11-06 14:08:45.404965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.182 qpair failed and we were unable to recover it.
00:25:06.182 [2024-11-06 14:08:45.414916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.183 [2024-11-06 14:08:45.414966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.183 [2024-11-06 14:08:45.414979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.183 [2024-11-06 14:08:45.414987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.183 [2024-11-06 14:08:45.414993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.183 [2024-11-06 14:08:45.415007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.183 qpair failed and we were unable to recover it.
00:25:06.183 [2024-11-06 14:08:45.424831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.183 [2024-11-06 14:08:45.424879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.183 [2024-11-06 14:08:45.424892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.183 [2024-11-06 14:08:45.424899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.183 [2024-11-06 14:08:45.424906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.183 [2024-11-06 14:08:45.424919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.183 qpair failed and we were unable to recover it.
00:25:06.183 [2024-11-06 14:08:45.434979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.183 [2024-11-06 14:08:45.435034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.183 [2024-11-06 14:08:45.435047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.183 [2024-11-06 14:08:45.435053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.183 [2024-11-06 14:08:45.435063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.183 [2024-11-06 14:08:45.435077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.183 qpair failed and we were unable to recover it.
00:25:06.183 [2024-11-06 14:08:45.444982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.183 [2024-11-06 14:08:45.445035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.183 [2024-11-06 14:08:45.445060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.183 [2024-11-06 14:08:45.445069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.183 [2024-11-06 14:08:45.445076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.183 [2024-11-06 14:08:45.445095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.183 qpair failed and we were unable to recover it.
00:25:06.183 [2024-11-06 14:08:45.455030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.183 [2024-11-06 14:08:45.455085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.183 [2024-11-06 14:08:45.455111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.183 [2024-11-06 14:08:45.455119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.183 [2024-11-06 14:08:45.455126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.183 [2024-11-06 14:08:45.455145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.183 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.464937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.464980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.464995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.465002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.465009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.465025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.475013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.475110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.475124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.475132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.475138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.475152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.485049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.485093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.485106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.485113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.485120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.485134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.495113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.495203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.495216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.495223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.495230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.495248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.505109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.505155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.505170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.505177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.505183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.505197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.515174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.515223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.515236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.515246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.515253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.515267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.525183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.525274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.525291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.525298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.525305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.525319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.535231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.535277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.535291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.535298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.535304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.535318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.545234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.445 [2024-11-06 14:08:45.545280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.445 [2024-11-06 14:08:45.545293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.445 [2024-11-06 14:08:45.545300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.445 [2024-11-06 14:08:45.545307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.445 [2024-11-06 14:08:45.545320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.445 qpair failed and we were unable to recover it.
00:25:06.445 [2024-11-06 14:08:45.555292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.555336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.555349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.555357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.555363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.555377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.565289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.565337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.565350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.565357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.565367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.565381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.575334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.575383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.575397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.575404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.575411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.575426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.585365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.585407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.585420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.585427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.585434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.585447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.595282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.595328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.595341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.595348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.595354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.595367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.605282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.605339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.605352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.605359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.605365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.605379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.615410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.615480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.615493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.615500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.615506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.615520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.625456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.625498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.625511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.625518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.625524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.625538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.635518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.635563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.635576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.635583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.635589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.635603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.645388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.645442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.645455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.645462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.645469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.645482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.655555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.655638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.655654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.655661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.655668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.655681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.665527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.665569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.665582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.665589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.665595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.665608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.675573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.675613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.446 [2024-11-06 14:08:45.675626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.446 [2024-11-06 14:08:45.675633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.446 [2024-11-06 14:08:45.675639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.446 [2024-11-06 14:08:45.675653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.446 qpair failed and we were unable to recover it.
00:25:06.446 [2024-11-06 14:08:45.685494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.446 [2024-11-06 14:08:45.685544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.447 [2024-11-06 14:08:45.685559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.447 [2024-11-06 14:08:45.685566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.447 [2024-11-06 14:08:45.685572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.447 [2024-11-06 14:08:45.685590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.447 qpair failed and we were unable to recover it.
00:25:06.447 [2024-11-06 14:08:45.695668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.447 [2024-11-06 14:08:45.695721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.447 [2024-11-06 14:08:45.695735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.447 [2024-11-06 14:08:45.695745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.447 [2024-11-06 14:08:45.695752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.447 [2024-11-06 14:08:45.695766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.447 qpair failed and we were unable to recover it.
00:25:06.447 [2024-11-06 14:08:45.705573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.447 [2024-11-06 14:08:45.705621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.447 [2024-11-06 14:08:45.705635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.447 [2024-11-06 14:08:45.705642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.447 [2024-11-06 14:08:45.705649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.447 [2024-11-06 14:08:45.705663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.447 qpair failed and we were unable to recover it.
00:25:06.447 [2024-11-06 14:08:45.715673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.447 [2024-11-06 14:08:45.715770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.447 [2024-11-06 14:08:45.715784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.447 [2024-11-06 14:08:45.715791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.447 [2024-11-06 14:08:45.715797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.447 [2024-11-06 14:08:45.715811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.447 qpair failed and we were unable to recover it.
00:25:06.447 [2024-11-06 14:08:45.725712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.447 [2024-11-06 14:08:45.725762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.447 [2024-11-06 14:08:45.725775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.447 [2024-11-06 14:08:45.725782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.447 [2024-11-06 14:08:45.725788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.447 [2024-11-06 14:08:45.725802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.447 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.735758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.735845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.735858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.735866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.735872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.735886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.745627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.745670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.745683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.745690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.745696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.745709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.755797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.755837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.755850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.755857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.755864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.755877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.765830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.765883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.765896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.765903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.765909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.765923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.775825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.775874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.775887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.775894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.775900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.775914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.785911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.785986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.786002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.786009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.786016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.786029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.795914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.795959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.795972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.795979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.795985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.795999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.805949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.805993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.806007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.806014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.806020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.806033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.815968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.816019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.816032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.816039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.816045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.816058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.708 qpair failed and we were unable to recover it.
00:25:06.708 [2024-11-06 14:08:45.825983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.708 [2024-11-06 14:08:45.826030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.708 [2024-11-06 14:08:45.826043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.708 [2024-11-06 14:08:45.826054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.708 [2024-11-06 14:08:45.826060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.708 [2024-11-06 14:08:45.826074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.836005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.836044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.836057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.836064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.836071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.836084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.846020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.846062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.846075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.846082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.846088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.846102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.856089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.856136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.856150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.856157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.856163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.856176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.865961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.866003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.866016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.866023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.866029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.866043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.876123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.876168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.876183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.876190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.876197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.876211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.886022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.886065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.886079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.886086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.886092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.886106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.896053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.896103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.896117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.896124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.896130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.896144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.906186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.906236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.906253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.906260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.906266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.906280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.916203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.916251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.916270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.916277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.916284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.916298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.926144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.926190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.926203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.926210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.926217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.926231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.936358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.936406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.936419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.936426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.936433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.936446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.946325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.946368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.946381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.946388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.946394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.946408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.956219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.956266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.956281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.956292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.709 [2024-11-06 14:08:45.956298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.709 [2024-11-06 14:08:45.956313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.709 qpair failed and we were unable to recover it.
00:25:06.709 [2024-11-06 14:08:45.966394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.709 [2024-11-06 14:08:45.966441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.709 [2024-11-06 14:08:45.966455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.709 [2024-11-06 14:08:45.966462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.710 [2024-11-06 14:08:45.966468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.710 [2024-11-06 14:08:45.966482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.710 qpair failed and we were unable to recover it.
00:25:06.710 [2024-11-06 14:08:45.976405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.710 [2024-11-06 14:08:45.976450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.710 [2024-11-06 14:08:45.976463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.710 [2024-11-06 14:08:45.976470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.710 [2024-11-06 14:08:45.976476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.710 [2024-11-06 14:08:45.976490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.710 qpair failed and we were unable to recover it.
00:25:06.710 [2024-11-06 14:08:45.986441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.710 [2024-11-06 14:08:45.986485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.710 [2024-11-06 14:08:45.986498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.710 [2024-11-06 14:08:45.986505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.710 [2024-11-06 14:08:45.986512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.710 [2024-11-06 14:08:45.986525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.710 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:45.996434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:45.996478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:45.996491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:45.996498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:45.996504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:45.996518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.006506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.006553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.006566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.006573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.006580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.006593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.016394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.016438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.016451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.016458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.016465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.016478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.026524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.026569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.026585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.026593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.026601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.026616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.036572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.036625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.036638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.036645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.036652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.036665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.046511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.046559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.046576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.046583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.046590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.046604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.056639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.056744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.056757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.056765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.056771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.056785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.066659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.066705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.066718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.066725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.066732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.066746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.970 [2024-11-06 14:08:46.076763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.970 [2024-11-06 14:08:46.076856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.970 [2024-11-06 14:08:46.076869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.970 [2024-11-06 14:08:46.076876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.970 [2024-11-06 14:08:46.076882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.970 [2024-11-06 14:08:46.076896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.970 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.086712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.086759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.086773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.086783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.086790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.086803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.096754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.096802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.096815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.096822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.096829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.096842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.106638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.106686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.106700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.106707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.106714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.106727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.116771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.116858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.116871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.116878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.116885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.116898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.126699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.126760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.126776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.126783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.126789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.126804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.136836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.136903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.136917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.136924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.136931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.136944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.146865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.146956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.146970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.146977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.146984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.146997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.156906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.971 [2024-11-06 14:08:46.156984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.971 [2024-11-06 14:08:46.156997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.971 [2024-11-06 14:08:46.157004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.971 [2024-11-06 14:08:46.157011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:06.971 [2024-11-06 14:08:46.157024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.971 qpair failed and we were unable to recover it.
00:25:06.971 [2024-11-06 14:08:46.166930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.971 [2024-11-06 14:08:46.166977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.971 [2024-11-06 14:08:46.166990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.971 [2024-11-06 14:08:46.166997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.971 [2024-11-06 14:08:46.167003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.971 [2024-11-06 14:08:46.167017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.971 qpair failed and we were unable to recover it. 00:25:06.971 [2024-11-06 14:08:46.176958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.971 [2024-11-06 14:08:46.177008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.971 [2024-11-06 14:08:46.177021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.971 [2024-11-06 14:08:46.177028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.971 [2024-11-06 14:08:46.177035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.971 [2024-11-06 14:08:46.177048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.971 qpair failed and we were unable to recover it. 00:25:06.971 [2024-11-06 14:08:46.186933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.971 [2024-11-06 14:08:46.186975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.971 [2024-11-06 14:08:46.186988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.971 [2024-11-06 14:08:46.186995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.971 [2024-11-06 14:08:46.187001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.971 [2024-11-06 14:08:46.187014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.971 qpair failed and we were unable to recover it. 
00:25:06.971 [2024-11-06 14:08:46.197001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.971 [2024-11-06 14:08:46.197079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.971 [2024-11-06 14:08:46.197093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.971 [2024-11-06 14:08:46.197100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.971 [2024-11-06 14:08:46.197106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.971 [2024-11-06 14:08:46.197120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.971 qpair failed and we were unable to recover it. 00:25:06.971 [2024-11-06 14:08:46.207017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.971 [2024-11-06 14:08:46.207089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.971 [2024-11-06 14:08:46.207103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.971 [2024-11-06 14:08:46.207110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.971 [2024-11-06 14:08:46.207116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.971 [2024-11-06 14:08:46.207129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.971 qpair failed and we were unable to recover it. 00:25:06.971 [2024-11-06 14:08:46.217101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.972 [2024-11-06 14:08:46.217179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.972 [2024-11-06 14:08:46.217193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.972 [2024-11-06 14:08:46.217204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.972 [2024-11-06 14:08:46.217211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.972 [2024-11-06 14:08:46.217226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.972 qpair failed and we were unable to recover it. 
00:25:06.972 [2024-11-06 14:08:46.227126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.972 [2024-11-06 14:08:46.227173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.972 [2024-11-06 14:08:46.227186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.972 [2024-11-06 14:08:46.227193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.972 [2024-11-06 14:08:46.227200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.972 [2024-11-06 14:08:46.227213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.972 qpair failed and we were unable to recover it. 00:25:06.972 [2024-11-06 14:08:46.237115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.972 [2024-11-06 14:08:46.237155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.972 [2024-11-06 14:08:46.237169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.972 [2024-11-06 14:08:46.237176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.972 [2024-11-06 14:08:46.237182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.972 [2024-11-06 14:08:46.237196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.972 qpair failed and we were unable to recover it. 00:25:06.972 [2024-11-06 14:08:46.247156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.972 [2024-11-06 14:08:46.247200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.972 [2024-11-06 14:08:46.247213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.972 [2024-11-06 14:08:46.247220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.972 [2024-11-06 14:08:46.247227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:06.972 [2024-11-06 14:08:46.247241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.972 qpair failed and we were unable to recover it. 
00:25:07.234 [2024-11-06 14:08:46.257157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.257203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.257216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.257223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.257229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.257243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.267052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.267095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.267108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.267115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.267122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.267135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.277189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.277231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.277248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.277256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.277262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.277276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 
00:25:07.234 [2024-11-06 14:08:46.287252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.287297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.287310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.287317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.287323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.287337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.297261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.297306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.297319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.297326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.297333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.297346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.307292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.307337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.307351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.307358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.307365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.307378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 
00:25:07.234 [2024-11-06 14:08:46.317326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.317369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.317383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.317389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.317396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.317410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.327421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.327469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.327482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.327489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.327496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.327509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.337384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.337446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.337460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.337467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.337474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.337488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 
00:25:07.234 [2024-11-06 14:08:46.347410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.347454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.347467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.347477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.347484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.347497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.357443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.357790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.357805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.357812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.357818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.357832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 00:25:07.234 [2024-11-06 14:08:46.367464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.234 [2024-11-06 14:08:46.367511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.234 [2024-11-06 14:08:46.367524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.234 [2024-11-06 14:08:46.367531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.234 [2024-11-06 14:08:46.367538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.234 [2024-11-06 14:08:46.367551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.234 qpair failed and we were unable to recover it. 
00:25:07.234 [2024-11-06 14:08:46.377490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.377538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.377551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.377558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.377564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.377578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.387521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.387565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.387579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.387586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.387592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.387610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.397535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.397578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.397595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.397602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.397608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.397623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 
00:25:07.235 [2024-11-06 14:08:46.407583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.407655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.407669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.407676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.407682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.407696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.417475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.417524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.417537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.417544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.417550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.417564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.427504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.427547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.427560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.427567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.427574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.427587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 
00:25:07.235 [2024-11-06 14:08:46.437646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.437696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.437710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.437717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.437723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.437737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.447658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.447716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.447729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.447737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.447743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.447757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.457586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.457639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.457652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.457659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.457665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.457679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 
00:25:07.235 [2024-11-06 14:08:46.467737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.467778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.467791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.467798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.467805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.467818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.477759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.477845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.477859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.477870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.477876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.477891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.487794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.487866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.487879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.487886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.487893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.487906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 
00:25:07.235 [2024-11-06 14:08:46.497803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.497850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.235 [2024-11-06 14:08:46.497863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.235 [2024-11-06 14:08:46.497870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.235 [2024-11-06 14:08:46.497877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.235 [2024-11-06 14:08:46.497890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.235 qpair failed and we were unable to recover it. 00:25:07.235 [2024-11-06 14:08:46.507839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.235 [2024-11-06 14:08:46.507880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.236 [2024-11-06 14:08:46.507894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.236 [2024-11-06 14:08:46.507901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.236 [2024-11-06 14:08:46.507907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.236 [2024-11-06 14:08:46.507920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.236 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.517728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.517773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.517786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.517794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.517800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.517821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 
00:25:07.497 [2024-11-06 14:08:46.527888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.527964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.527979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.527986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.527993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.528010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.537929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.537977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.537993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.538000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.538007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.538021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.547806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.547850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.547864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.547871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.547877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.547892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 
00:25:07.497 [2024-11-06 14:08:46.557839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.557883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.557897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.557905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.557911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.557925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.567902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.567954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.567968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.567975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.567981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.567994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.577899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.577949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.577962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.577969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.577976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.577989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 
00:25:07.497 [2024-11-06 14:08:46.588049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.588133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.588146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.588153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.588160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.588173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.598080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.497 [2024-11-06 14:08:46.598124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.497 [2024-11-06 14:08:46.598137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.497 [2024-11-06 14:08:46.598144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.497 [2024-11-06 14:08:46.598151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.497 [2024-11-06 14:08:46.598164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.497 qpair failed and we were unable to recover it. 00:25:07.497 [2024-11-06 14:08:46.608105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.608173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.608187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.608197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.608203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.608217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 
00:25:07.498 [2024-11-06 14:08:46.618140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.618187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.618200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.618207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.618213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.618227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.628144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.628185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.628199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.628206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.628212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.628226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.638156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.638203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.638216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.638223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.638230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.638247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 
00:25:07.498 [2024-11-06 14:08:46.648206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.648258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.648271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.648278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.648284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.648301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.658200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.658246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.658260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.658267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.658274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.658287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.668239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.668300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.668313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.668320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.668326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.668340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 
00:25:07.498 [2024-11-06 14:08:46.678261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.678302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.678315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.678322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.678328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.678342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.688272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.688333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.688346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.688353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.688359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.688373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.698340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.698390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.698403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.698411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.698417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.698431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 
00:25:07.498 [2024-11-06 14:08:46.708429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.708474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.708487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.708494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.708501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.708514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.718354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.718399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.718413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.718419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.718426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.718440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 00:25:07.498 [2024-11-06 14:08:46.728398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.498 [2024-11-06 14:08:46.728450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.498 [2024-11-06 14:08:46.728464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.498 [2024-11-06 14:08:46.728471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.498 [2024-11-06 14:08:46.728477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:07.498 [2024-11-06 14:08:46.728491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.498 qpair failed and we were unable to recover it. 
00:25:07.498 [2024-11-06 14:08:46.738465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.499 [2024-11-06 14:08:46.738545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.499 [2024-11-06 14:08:46.738560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.499 [2024-11-06 14:08:46.738571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.499 [2024-11-06 14:08:46.738577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.499 [2024-11-06 14:08:46.738596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.499 qpair failed and we were unable to recover it.
00:25:07.499 [2024-11-06 14:08:46.748452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.499 [2024-11-06 14:08:46.748497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.499 [2024-11-06 14:08:46.748512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.499 [2024-11-06 14:08:46.748519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.499 [2024-11-06 14:08:46.748526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.499 [2024-11-06 14:08:46.748540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.499 qpair failed and we were unable to recover it.
00:25:07.499 [2024-11-06 14:08:46.758481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.499 [2024-11-06 14:08:46.758524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.499 [2024-11-06 14:08:46.758537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.499 [2024-11-06 14:08:46.758544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.499 [2024-11-06 14:08:46.758551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.499 [2024-11-06 14:08:46.758564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.499 qpair failed and we were unable to recover it.
00:25:07.499 [2024-11-06 14:08:46.768513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.499 [2024-11-06 14:08:46.768556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.499 [2024-11-06 14:08:46.768569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.499 [2024-11-06 14:08:46.768577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.499 [2024-11-06 14:08:46.768583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.499 [2024-11-06 14:08:46.768596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.499 qpair failed and we were unable to recover it.
00:25:07.499 [2024-11-06 14:08:46.778550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.499 [2024-11-06 14:08:46.778602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.499 [2024-11-06 14:08:46.778616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.499 [2024-11-06 14:08:46.778623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.499 [2024-11-06 14:08:46.778629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.499 [2024-11-06 14:08:46.778646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.499 qpair failed and we were unable to recover it.
00:25:07.760 [2024-11-06 14:08:46.788564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.760 [2024-11-06 14:08:46.788607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.760 [2024-11-06 14:08:46.788620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.760 [2024-11-06 14:08:46.788628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.760 [2024-11-06 14:08:46.788634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.760 [2024-11-06 14:08:46.788648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.760 qpair failed and we were unable to recover it.
00:25:07.760 [2024-11-06 14:08:46.798591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.760 [2024-11-06 14:08:46.798631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.760 [2024-11-06 14:08:46.798645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.760 [2024-11-06 14:08:46.798652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.760 [2024-11-06 14:08:46.798658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.760 [2024-11-06 14:08:46.798672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.760 qpair failed and we were unable to recover it.
00:25:07.760 [2024-11-06 14:08:46.808608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.760 [2024-11-06 14:08:46.808653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.760 [2024-11-06 14:08:46.808667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.760 [2024-11-06 14:08:46.808674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.760 [2024-11-06 14:08:46.808680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.760 [2024-11-06 14:08:46.808694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.760 qpair failed and we were unable to recover it.
00:25:07.760 [2024-11-06 14:08:46.818657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.760 [2024-11-06 14:08:46.818703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.760 [2024-11-06 14:08:46.818716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.760 [2024-11-06 14:08:46.818723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.760 [2024-11-06 14:08:46.818729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.760 [2024-11-06 14:08:46.818743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.760 qpair failed and we were unable to recover it.
00:25:07.760 [2024-11-06 14:08:46.828679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.760 [2024-11-06 14:08:46.828725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.760 [2024-11-06 14:08:46.828738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.760 [2024-11-06 14:08:46.828745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.760 [2024-11-06 14:08:46.828751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.760 [2024-11-06 14:08:46.828765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.760 qpair failed and we were unable to recover it.
00:25:07.760 [2024-11-06 14:08:46.838700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.760 [2024-11-06 14:08:46.838750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.760 [2024-11-06 14:08:46.838764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.760 [2024-11-06 14:08:46.838771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.760 [2024-11-06 14:08:46.838777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.760 [2024-11-06 14:08:46.838790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.760 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.848604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.848648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.848662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.848669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.848675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.848688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.858633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.858675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.858690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.858697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.858704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.858717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.868760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.868802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.868816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.868826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.868833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.868847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.878795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.878840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.878853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.878860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.878866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.878880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.888847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.888896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.888909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.888916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.888923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.888937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.898875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.898929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.898942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.898949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.898955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.898969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.908888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.908931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.908945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.908952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.908958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.908975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.918922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.918967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.918980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.918987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.918993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.919006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.928923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.928971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.928985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.928992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.928999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.929012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.939000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.939051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.939064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.939071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.939078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.939091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.948875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.948921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.948934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.948941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.948948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.948961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.959043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.959096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.959110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.959117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.959123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.959137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.969075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.761 [2024-11-06 14:08:46.969122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.761 [2024-11-06 14:08:46.969136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.761 [2024-11-06 14:08:46.969143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.761 [2024-11-06 14:08:46.969149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.761 [2024-11-06 14:08:46.969162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.761 qpair failed and we were unable to recover it.
00:25:07.761 [2024-11-06 14:08:46.979070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:46.979115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:46.979129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:46.979135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:46.979142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:46.979155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:07.762 [2024-11-06 14:08:46.989063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:46.989108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:46.989121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:46.989128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:46.989135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:46.989148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:07.762 [2024-11-06 14:08:46.999124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:46.999168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:46.999182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:46.999192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:46.999198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:46.999212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:07.762 [2024-11-06 14:08:47.009183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:47.009274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:47.009288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:47.009295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:47.009301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:47.009315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:07.762 [2024-11-06 14:08:47.019222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:47.019281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:47.019295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:47.019302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:47.019308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:47.019322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:07.762 [2024-11-06 14:08:47.029218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:47.029264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:47.029278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:47.029285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:47.029291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:47.029304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:07.762 [2024-11-06 14:08:47.039258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.762 [2024-11-06 14:08:47.039303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.762 [2024-11-06 14:08:47.039316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.762 [2024-11-06 14:08:47.039323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.762 [2024-11-06 14:08:47.039330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:07.762 [2024-11-06 14:08:47.039348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.762 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.049304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.049395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.049409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.049416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.049423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.049436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.059349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.059395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.059408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.059415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.059421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.059435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.069358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.069398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.069411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.069418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.069424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.069438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.079374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.079419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.079432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.079439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.079446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.079459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.089391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.089439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.089452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.089459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.089466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.089479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.099424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.099477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.099490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.099497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.099504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.099517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.109311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.109354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.109367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.109374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.109381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.109394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.119372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.119414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.119427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.119434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.119440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.119454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.129513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.024 [2024-11-06 14:08:47.129570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.024 [2024-11-06 14:08:47.129583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.024 [2024-11-06 14:08:47.129593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.024 [2024-11-06 14:08:47.129599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.024 [2024-11-06 14:08:47.129613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.024 qpair failed and we were unable to recover it.
00:25:08.024 [2024-11-06 14:08:47.139560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.139628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.139642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.139649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.139656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.139670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.149431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.149524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.149537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.149544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.149550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.149564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.159555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.159605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.159618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.159625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.159631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.159645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.169616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.169662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.169677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.169684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.169690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.169707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.179659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.179704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.179718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.179724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.179731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.179744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.189644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.189682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.189696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.189703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.189709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.189723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.199543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.199588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.199602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.199609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.199615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.199629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.209720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.209782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.209795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.209802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.209809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.209822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.219760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.219839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.219852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.219860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.219867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.219881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.229753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.229800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.229814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.229822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.229830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.229844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.239781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.239826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.239839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.239846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.239852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.239866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.249837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.249884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.249898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.249905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.249911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.249924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.259882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.259928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.259942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.025 [2024-11-06 14:08:47.259952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.025 [2024-11-06 14:08:47.259959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.025 [2024-11-06 14:08:47.259973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.025 qpair failed and we were unable to recover it.
00:25:08.025 [2024-11-06 14:08:47.269852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.025 [2024-11-06 14:08:47.269939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.025 [2024-11-06 14:08:47.269965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.026 [2024-11-06 14:08:47.269974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.026 [2024-11-06 14:08:47.269981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.026 [2024-11-06 14:08:47.270000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.026 qpair failed and we were unable to recover it.
00:25:08.026 [2024-11-06 14:08:47.279826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.026 [2024-11-06 14:08:47.279873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.026 [2024-11-06 14:08:47.279888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.026 [2024-11-06 14:08:47.279895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.026 [2024-11-06 14:08:47.279901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.026 [2024-11-06 14:08:47.279916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.026 qpair failed and we were unable to recover it.
00:25:08.026 [2024-11-06 14:08:47.289938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.026 [2024-11-06 14:08:47.289982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.026 [2024-11-06 14:08:47.289996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.026 [2024-11-06 14:08:47.290003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.026 [2024-11-06 14:08:47.290010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.026 [2024-11-06 14:08:47.290024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.026 qpair failed and we were unable to recover it.
00:25:08.026 [2024-11-06 14:08:47.299897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.026 [2024-11-06 14:08:47.299944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.026 [2024-11-06 14:08:47.299958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.026 [2024-11-06 14:08:47.299965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.026 [2024-11-06 14:08:47.299971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.026 [2024-11-06 14:08:47.299990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.026 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.309994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.310074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.310088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.310095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.310101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.310115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.319990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.320038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.320051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.320058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.320065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.320078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.330087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.330162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.330176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.330183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.330189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.330203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.340092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.340137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.340151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.340158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.340164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.340178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.349970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.350059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.350073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.350079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.350086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.350100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.359991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.360032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.360045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.360052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.360059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.360072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.370207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.370287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.288 [2024-11-06 14:08:47.370302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.288 [2024-11-06 14:08:47.370309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.288 [2024-11-06 14:08:47.370315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.288 [2024-11-06 14:08:47.370330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.288 qpair failed and we were unable to recover it.
00:25:08.288 [2024-11-06 14:08:47.380212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.288 [2024-11-06 14:08:47.380265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.380278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.380285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.380292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.380305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.390228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.390273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.390287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.390298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.390304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.390319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.400308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.400378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.400393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.400400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.400407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.400422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.410287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.410334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.410347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.410354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.410361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.410375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.420184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.420280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.420293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.420300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.420307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.420320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.430345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.430396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.430409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.430416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.430422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.430440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.440429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.440486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.440499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.440506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.440513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.440526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.450427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.450474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.450488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.450495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.450501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.450514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.460445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.460491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.460504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.460510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.460517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.460531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.470425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.470477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.470490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.470497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.470504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.470517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.480479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.480557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.480570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.480577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.480583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.480597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.490482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.490525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.490539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.289 [2024-11-06 14:08:47.490545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.289 [2024-11-06 14:08:47.490552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.289 [2024-11-06 14:08:47.490565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.289 qpair failed and we were unable to recover it.
00:25:08.289 [2024-11-06 14:08:47.500417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.289 [2024-11-06 14:08:47.500463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.289 [2024-11-06 14:08:47.500476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.500483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.500489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.500503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.510525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.510567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.510580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.510587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.510593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.510607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.520579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.520624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.520640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.520647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.520654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.520668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.530614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.530660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.530674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.530681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.530689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.530703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.540619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.540667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.540680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.540687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.540694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.540708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.550656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.550720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.550734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.550741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.550747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.550761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.560662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.560703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.560716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.560723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.560729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.560746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.290 [2024-11-06 14:08:47.570604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.290 [2024-11-06 14:08:47.570661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.290 [2024-11-06 14:08:47.570674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.290 [2024-11-06 14:08:47.570681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.290 [2024-11-06 14:08:47.570687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.290 [2024-11-06 14:08:47.570701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.290 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.580759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.580820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.580834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.580841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.580848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.580862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.590769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.590848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.590862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.590869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.590875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.590889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.600796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.600837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.600851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.600858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.600864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.600878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.610826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.610870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.610883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.610890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.610897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.610910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.620834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.620885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.620898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.620906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.620912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.620926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.630906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.630979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.630993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.631000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.631006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.631020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.640894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.640939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.551 [2024-11-06 14:08:47.640952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.551 [2024-11-06 14:08:47.640959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.551 [2024-11-06 14:08:47.640965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.551 [2024-11-06 14:08:47.640979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.551 qpair failed and we were unable to recover it.
00:25:08.551 [2024-11-06 14:08:47.650934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.551 [2024-11-06 14:08:47.650978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.650995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.651002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.651008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.651022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.660980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.661025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.661038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.661045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.661051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.661065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.670984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.671032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.671045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.671052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.671059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.671073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.681010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.681052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.681066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.681073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.681080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.681093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.691072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.691152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.691165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.691172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.691179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.691195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.701091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.701136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.701149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.701156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.701163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.701176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.711103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.711146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.711159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.711167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.711173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.711187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.721119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.721169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.721182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.721189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.721195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.721209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.731128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.731219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.731233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.731240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.731250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.731264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.741189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.741235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.741253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.741260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.741267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.741280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.751202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.751251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.751265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.751272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.751278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.751292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.761218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.761302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.761315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.761322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.761329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.761342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.771242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.771298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.771312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.552 [2024-11-06 14:08:47.771319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.552 [2024-11-06 14:08:47.771325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.552 [2024-11-06 14:08:47.771339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.552 qpair failed and we were unable to recover it.
00:25:08.552 [2024-11-06 14:08:47.781292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.552 [2024-11-06 14:08:47.781347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.552 [2024-11-06 14:08:47.781363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.553 [2024-11-06 14:08:47.781371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.553 [2024-11-06 14:08:47.781377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.553 [2024-11-06 14:08:47.781391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.553 qpair failed and we were unable to recover it.
00:25:08.553 [2024-11-06 14:08:47.791300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.553 [2024-11-06 14:08:47.791343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.553 [2024-11-06 14:08:47.791358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.553 [2024-11-06 14:08:47.791365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.553 [2024-11-06 14:08:47.791371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.553 [2024-11-06 14:08:47.791385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.553 qpair failed and we were unable to recover it.
00:25:08.553 [2024-11-06 14:08:47.801328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.553 [2024-11-06 14:08:47.801391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.553 [2024-11-06 14:08:47.801405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.553 [2024-11-06 14:08:47.801412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.553 [2024-11-06 14:08:47.801418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.553 [2024-11-06 14:08:47.801432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.553 qpair failed and we were unable to recover it.
00:25:08.553 [2024-11-06 14:08:47.811352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.553 [2024-11-06 14:08:47.811423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.553 [2024-11-06 14:08:47.811437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.553 [2024-11-06 14:08:47.811444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.553 [2024-11-06 14:08:47.811450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.553 [2024-11-06 14:08:47.811464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.553 qpair failed and we were unable to recover it.
00:25:08.553 [2024-11-06 14:08:47.821380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.553 [2024-11-06 14:08:47.821429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.553 [2024-11-06 14:08:47.821442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.553 [2024-11-06 14:08:47.821449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.553 [2024-11-06 14:08:47.821456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.553 [2024-11-06 14:08:47.821473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.553 qpair failed and we were unable to recover it.
00:25:08.553 [2024-11-06 14:08:47.831386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.553 [2024-11-06 14:08:47.831428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.553 [2024-11-06 14:08:47.831441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.553 [2024-11-06 14:08:47.831448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.553 [2024-11-06 14:08:47.831455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.553 [2024-11-06 14:08:47.831468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.553 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.841443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.841484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.841497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.841504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.841510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.841524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.851466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.851521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.851534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.851541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.851548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.851561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.861390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.861434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.861449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.861456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.861463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.861477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.871563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.871618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.871633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.871640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.871646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.871660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.881564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.881611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.881625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.881632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.881638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.881651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.891590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.891640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.891654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.891661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.891667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.891681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.901480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.901526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.901539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.901546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.901553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.901566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.911637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.911685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.911702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.911709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.814 [2024-11-06 14:08:47.911715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.814 [2024-11-06 14:08:47.911729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.814 qpair failed and we were unable to recover it.
00:25:08.814 [2024-11-06 14:08:47.921520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.814 [2024-11-06 14:08:47.921569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.814 [2024-11-06 14:08:47.921582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.814 [2024-11-06 14:08:47.921589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.921596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.921609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.931658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.931700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.931713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.931720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.931727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.931740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.941719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.941767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.941780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.941787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.941794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.941807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.951714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.951766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.951779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.951786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.951792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.951808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.961767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.961863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.961876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.961884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.961890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.961903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.971789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.971836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.971849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.971856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.971862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.971876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.981852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.981900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.981914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.981921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.981927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.981941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:47.991821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:47.991862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:47.991875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:47.991882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:47.991888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:47.991903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:48.001869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:48.001919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:48.001933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:48.001940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:48.001947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:48.001961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:48.011807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:48.011854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:48.011869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:48.011876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:48.011882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:48.011896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:48.021932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:48.021983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:48.021996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:48.022003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:48.022010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:48.022024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:48.031939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:48.031986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:48.032012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:48.032021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:48.032028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:48.032047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:48.041845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:48.041887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:48.041906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:48.041914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:48.041921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.815 [2024-11-06 14:08:48.041936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.815 qpair failed and we were unable to recover it.
00:25:08.815 [2024-11-06 14:08:48.051994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.815 [2024-11-06 14:08:48.052041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.815 [2024-11-06 14:08:48.052055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.815 [2024-11-06 14:08:48.052062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.815 [2024-11-06 14:08:48.052069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.816 [2024-11-06 14:08:48.052083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.816 qpair failed and we were unable to recover it.
00:25:08.816 [2024-11-06 14:08:48.062004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.816 [2024-11-06 14:08:48.062070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.816 [2024-11-06 14:08:48.062095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.816 [2024-11-06 14:08:48.062104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.816 [2024-11-06 14:08:48.062111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.816 [2024-11-06 14:08:48.062131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.816 qpair failed and we were unable to recover it.
00:25:08.816 [2024-11-06 14:08:48.072073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.816 [2024-11-06 14:08:48.072165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.816 [2024-11-06 14:08:48.072181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.816 [2024-11-06 14:08:48.072188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.816 [2024-11-06 14:08:48.072195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.816 [2024-11-06 14:08:48.072210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.816 qpair failed and we were unable to recover it.
00:25:08.816 [2024-11-06 14:08:48.082083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.816 [2024-11-06 14:08:48.082126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.816 [2024-11-06 14:08:48.082140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.816 [2024-11-06 14:08:48.082147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.816 [2024-11-06 14:08:48.082158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.816 [2024-11-06 14:08:48.082173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.816 qpair failed and we were unable to recover it.
00:25:08.816 [2024-11-06 14:08:48.092102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.816 [2024-11-06 14:08:48.092145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.816 [2024-11-06 14:08:48.092159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.816 [2024-11-06 14:08:48.092166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.816 [2024-11-06 14:08:48.092172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:08.816 [2024-11-06 14:08:48.092186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.816 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.102139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.102188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.102201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.102209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.102215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.102229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.112155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.112233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.112251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.112259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.112266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.112280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.122206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.122257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.122271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.122278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.122284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.122298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.132225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.132273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.132287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.132294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.132300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.132315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.142250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.142296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.142309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.142316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.142322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.142336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.152310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.152372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.152385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.152392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.152398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.152412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.162935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.163018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.163032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.077 [2024-11-06 14:08:48.163039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.077 [2024-11-06 14:08:48.163046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.077 [2024-11-06 14:08:48.163059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.077 qpair failed and we were unable to recover it.
00:25:09.077 [2024-11-06 14:08:48.172326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.077 [2024-11-06 14:08:48.172393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.077 [2024-11-06 14:08:48.172410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.172417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.172424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.172438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.182241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.182296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.182309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.182316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.182322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.182336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.192388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.192437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.192450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.192457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.192463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.192477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.202273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.202316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.202331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.202338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.202344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.202359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.212420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.212463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.212476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.212483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.212493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.212507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.222479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.222565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.222578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.222585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.222591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.222605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.232488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.232532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.232545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.232552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.232559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.232573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.242494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.242539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.242552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.242559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.242565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.242579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.252515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.252601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.252615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.252624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.252632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.252646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.262553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.262627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.262640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.262647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.262653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.262667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.272631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.272702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.272715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.272722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.272728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.272742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.282579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.282662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.282676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.282683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.282689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.282702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.292508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.292553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.292566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.292573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.078 [2024-11-06 14:08:48.292580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.078 [2024-11-06 14:08:48.292593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.078 qpair failed and we were unable to recover it.
00:25:09.078 [2024-11-06 14:08:48.302658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.078 [2024-11-06 14:08:48.302699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.078 [2024-11-06 14:08:48.302716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.078 [2024-11-06 14:08:48.302723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.079 [2024-11-06 14:08:48.302730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.079 [2024-11-06 14:08:48.302743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.079 qpair failed and we were unable to recover it.
00:25:09.079 [2024-11-06 14:08:48.312683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.079 [2024-11-06 14:08:48.312739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.079 [2024-11-06 14:08:48.312753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.079 [2024-11-06 14:08:48.312760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.079 [2024-11-06 14:08:48.312766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.079 [2024-11-06 14:08:48.312780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.079 qpair failed and we were unable to recover it.
00:25:09.079 [2024-11-06 14:08:48.322601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.079 [2024-11-06 14:08:48.322640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.079 [2024-11-06 14:08:48.322654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.079 [2024-11-06 14:08:48.322661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.079 [2024-11-06 14:08:48.322667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.079 [2024-11-06 14:08:48.322680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.079 qpair failed and we were unable to recover it.
00:25:09.079 [2024-11-06 14:08:48.332723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.079 [2024-11-06 14:08:48.332768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.079 [2024-11-06 14:08:48.332781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.079 [2024-11-06 14:08:48.332788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.079 [2024-11-06 14:08:48.332794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.079 [2024-11-06 14:08:48.332807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.079 qpair failed and we were unable to recover it.
00:25:09.079 [2024-11-06 14:08:48.342818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.079 [2024-11-06 14:08:48.342869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.079 [2024-11-06 14:08:48.342882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.079 [2024-11-06 14:08:48.342889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.079 [2024-11-06 14:08:48.342899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.079 [2024-11-06 14:08:48.342912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.079 qpair failed and we were unable to recover it.
00:25:09.079 [2024-11-06 14:08:48.352820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.079 [2024-11-06 14:08:48.352867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.079 [2024-11-06 14:08:48.352880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.079 [2024-11-06 14:08:48.352887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.079 [2024-11-06 14:08:48.352894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.079 [2024-11-06 14:08:48.352907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.079 qpair failed and we were unable to recover it.
00:25:09.340 [2024-11-06 14:08:48.362822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.340 [2024-11-06 14:08:48.362870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.340 [2024-11-06 14:08:48.362883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.340 [2024-11-06 14:08:48.362890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.340 [2024-11-06 14:08:48.362897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.340 [2024-11-06 14:08:48.362910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.340 qpair failed and we were unable to recover it.
00:25:09.340 [2024-11-06 14:08:48.372859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.340 [2024-11-06 14:08:48.372914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.372939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.372948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.372955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.372974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.382892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.382943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.382959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.382966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.382973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.382987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.392901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.392947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.392961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.392968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.392975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.392989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.402836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.402881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.402897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.402904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.402910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.402925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.412965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.413059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.413074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.413081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.413087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.413102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.423002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.423054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.423080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.423089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.423096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.423115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.433020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.433064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.433083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.433091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.433097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.433112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.443030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.443129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.443143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.443150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.443157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.443171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.453057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.453105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.453118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.453125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.453132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.453146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.463109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.463192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.463205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.463212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.463219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.463232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.473136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.473178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.473191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.473198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.473208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.473223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.483174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.483267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.483280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.483287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.483294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.483308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.493149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.493193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.341 [2024-11-06 14:08:48.493206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.341 [2024-11-06 14:08:48.493213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.341 [2024-11-06 14:08:48.493220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.341 [2024-11-06 14:08:48.493233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.341 qpair failed and we were unable to recover it.
00:25:09.341 [2024-11-06 14:08:48.503183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.341 [2024-11-06 14:08:48.503274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.503287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.503294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.503301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.503315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.513255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.513297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.513310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.513318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.513325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.513339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.523255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.523338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.523352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.523359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.523365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.523379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.533287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.533335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.533349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.533356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.533362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.533376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.543296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.543340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.543353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.543360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.543366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.543380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.553200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.553263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.553276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.553283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.553290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.553303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.563359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.563401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.563418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.563425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.563432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.563445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.573383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.573449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.573462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.573469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.573476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.573489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.583429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.583479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.583492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.583499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.583505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.583519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.593487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.593538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.593551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.593558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.593565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.593579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.603485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.603537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.603550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.603557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.603567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.603581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.342 [2024-11-06 14:08:48.613369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.342 [2024-11-06 14:08:48.613414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.342 [2024-11-06 14:08:48.613430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.342 [2024-11-06 14:08:48.613438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.342 [2024-11-06 14:08:48.613445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.342 [2024-11-06 14:08:48.613459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.342 qpair failed and we were unable to recover it.
00:25:09.604 [2024-11-06 14:08:48.623549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.604 [2024-11-06 14:08:48.623598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.604 [2024-11-06 14:08:48.623613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.604 [2024-11-06 14:08:48.623620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.604 [2024-11-06 14:08:48.623626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.604 [2024-11-06 14:08:48.623640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.604 qpair failed and we were unable to recover it.
00:25:09.604 [2024-11-06 14:08:48.633542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.604 [2024-11-06 14:08:48.633587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.604 [2024-11-06 14:08:48.633600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.604 [2024-11-06 14:08:48.633607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.604 [2024-11-06 14:08:48.633613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.604 [2024-11-06 14:08:48.633627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.604 qpair failed and we were unable to recover it.
00:25:09.604 [2024-11-06 14:08:48.643563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.604 [2024-11-06 14:08:48.643602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.604 [2024-11-06 14:08:48.643615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.604 [2024-11-06 14:08:48.643622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.604 [2024-11-06 14:08:48.643629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.604 [2024-11-06 14:08:48.643642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.604 qpair failed and we were unable to recover it.
00:25:09.604 [2024-11-06 14:08:48.653602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.604 [2024-11-06 14:08:48.653656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.604 [2024-11-06 14:08:48.653669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.604 [2024-11-06 14:08:48.653676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.604 [2024-11-06 14:08:48.653683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.604 [2024-11-06 14:08:48.653697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.604 qpair failed and we were unable to recover it.
00:25:09.604 [2024-11-06 14:08:48.663794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.604 [2024-11-06 14:08:48.663866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.604 [2024-11-06 14:08:48.663880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.604 [2024-11-06 14:08:48.663887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.604 [2024-11-06 14:08:48.663893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.663906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.673669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.673755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.673768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.673776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.673782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.673796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.683692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.683738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.683751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.683758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.683764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.683778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.693716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.693784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.693799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.693807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.693813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.693827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.703748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.703797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.703811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.703818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.703824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.703838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.713760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.713804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.713817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.713824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.713831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.713845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.723797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.723841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.723854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.723861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.723868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.723881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.733833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.733878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.733891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.733899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.733909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.733922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.743850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.743899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.743912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.743919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.743926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.743939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.753792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.753834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.753847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.753854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.753860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.753874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.763901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.763945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.763958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.763965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.763972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.763985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.773847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.773901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.773915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.773922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.773928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.773941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.783976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.784060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.784077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.784084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.784090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.784105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.605 qpair failed and we were unable to recover it.
00:25:09.605 [2024-11-06 14:08:48.793914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.605 [2024-11-06 14:08:48.793963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.605 [2024-11-06 14:08:48.793988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.605 [2024-11-06 14:08:48.793997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.605 [2024-11-06 14:08:48.794004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.605 [2024-11-06 14:08:48.794025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.804027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.804074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.804090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.804097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.804104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.804119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.814079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.814166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.814179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.814186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.814193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.814207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.824060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.824109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.824126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.824134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.824140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.824154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.834108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.834153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.834167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.834174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.834181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.834195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.844123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.844184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.844200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.844207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.844214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.844232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.854159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.854204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.854219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.854226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.854233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.854251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.864199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.864302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.864316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.864324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.864334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.864348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.874206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.874255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.874269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.874277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.874283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.874297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.606 [2024-11-06 14:08:48.884209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.606 [2024-11-06 14:08:48.884258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.606 [2024-11-06 14:08:48.884271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.606 [2024-11-06 14:08:48.884279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.606 [2024-11-06 14:08:48.884285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.606 [2024-11-06 14:08:48.884299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.606 qpair failed and we were unable to recover it.
00:25:09.867 [2024-11-06 14:08:48.894266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.867 [2024-11-06 14:08:48.894331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.867 [2024-11-06 14:08:48.894345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.867 [2024-11-06 14:08:48.894352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.867 [2024-11-06 14:08:48.894358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.867 [2024-11-06 14:08:48.894372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.867 qpair failed and we were unable to recover it.
00:25:09.867 [2024-11-06 14:08:48.904315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.867 [2024-11-06 14:08:48.904396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.867 [2024-11-06 14:08:48.904410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.867 [2024-11-06 14:08:48.904417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.867 [2024-11-06 14:08:48.904423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.867 [2024-11-06 14:08:48.904437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.867 qpair failed and we were unable to recover it.
00:25:09.867 [2024-11-06 14:08:48.914185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.867 [2024-11-06 14:08:48.914266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.867 [2024-11-06 14:08:48.914280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.867 [2024-11-06 14:08:48.914287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.867 [2024-11-06 14:08:48.914294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.867 [2024-11-06 14:08:48.914307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.867 qpair failed and we were unable to recover it.
00:25:09.867 [2024-11-06 14:08:48.924318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.867 [2024-11-06 14:08:48.924357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.867 [2024-11-06 14:08:48.924370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.924377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.924383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.924397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.934255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.934302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.934317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.934324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.934330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.934345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.944449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.944495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.944509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.944516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.944522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.944536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.954419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.954469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.954485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.954493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.954499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.954513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.964315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.964360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.964373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.964380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.964386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.964400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.974483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.974539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.974552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.974559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.974566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.974579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.984523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.984574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.984587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.984594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.984600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.984614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:48.994548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:48.994588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:48.994602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:48.994609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:48.994619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:48.994633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:49.004577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:49.004655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:49.004668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:49.004676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:49.004683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:49.004697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:49.014477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.868 [2024-11-06 14:08:49.014533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.868 [2024-11-06 14:08:49.014546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.868 [2024-11-06 14:08:49.014553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.868 [2024-11-06 14:08:49.014560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:09.868 [2024-11-06 14:08:49.014573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:09.868 qpair failed and we were unable to recover it.
00:25:09.868 [2024-11-06 14:08:49.024611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.868 [2024-11-06 14:08:49.024655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.868 [2024-11-06 14:08:49.024668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.868 [2024-11-06 14:08:49.024675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.868 [2024-11-06 14:08:49.024682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.868 [2024-11-06 14:08:49.024695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.868 qpair failed and we were unable to recover it. 00:25:09.868 [2024-11-06 14:08:49.034652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.868 [2024-11-06 14:08:49.034708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.868 [2024-11-06 14:08:49.034724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.868 [2024-11-06 14:08:49.034731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.868 [2024-11-06 14:08:49.034737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.868 [2024-11-06 14:08:49.034751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.868 qpair failed and we were unable to recover it. 00:25:09.868 [2024-11-06 14:08:49.044595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.868 [2024-11-06 14:08:49.044641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.868 [2024-11-06 14:08:49.044654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.868 [2024-11-06 14:08:49.044661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.868 [2024-11-06 14:08:49.044667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.868 [2024-11-06 14:08:49.044681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.868 qpair failed and we were unable to recover it. 
00:25:09.868 [2024-11-06 14:08:49.054778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.868 [2024-11-06 14:08:49.054869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.868 [2024-11-06 14:08:49.054882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.054889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.054895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.054909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:09.869 [2024-11-06 14:08:49.064719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.064767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.064781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.064788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.064794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.064808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:09.869 [2024-11-06 14:08:49.074660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.074701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.074714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.074721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.074727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.074741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 
00:25:09.869 [2024-11-06 14:08:49.084779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.084824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.084840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.084847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.084853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.084867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:09.869 [2024-11-06 14:08:49.094817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.094864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.094877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.094884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.094890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.094904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:09.869 [2024-11-06 14:08:49.104843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.104894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.104907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.104914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.104921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.104935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 
00:25:09.869 [2024-11-06 14:08:49.114868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.114911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.114924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.114931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.114937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.114951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:09.869 [2024-11-06 14:08:49.124771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.124815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.124828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.124835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.124845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.124859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:09.869 [2024-11-06 14:08:49.134933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.134979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.134992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.134999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.135006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.135020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 
00:25:09.869 [2024-11-06 14:08:49.144838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.869 [2024-11-06 14:08:49.144885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.869 [2024-11-06 14:08:49.144898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.869 [2024-11-06 14:08:49.144905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.869 [2024-11-06 14:08:49.144911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:09.869 [2024-11-06 14:08:49.144925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:09.869 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.154980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.155025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.155039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.155046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.155052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.155066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.165008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.165053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.165067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.165074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.165080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.165094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 
00:25:10.134 [2024-11-06 14:08:49.175047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.175093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.175106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.175113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.175119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.175133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.184959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.185004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.185017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.185025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.185032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.185045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.195095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.195156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.195169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.195176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.195182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.195196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 
00:25:10.134 [2024-11-06 14:08:49.205130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.205182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.205197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.205204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.205210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.205224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.215072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.215124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.215141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.215148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.215154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.215167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.225065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.225109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.225122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.225129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.225136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.225149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 
00:25:10.134 [2024-11-06 14:08:49.235220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.235271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.235284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.235291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.235297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.235312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.245249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.245296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.245311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.245318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.245324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.134 [2024-11-06 14:08:49.245339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.134 qpair failed and we were unable to recover it. 00:25:10.134 [2024-11-06 14:08:49.255279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.134 [2024-11-06 14:08:49.255325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.134 [2024-11-06 14:08:49.255338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.134 [2024-11-06 14:08:49.255345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.134 [2024-11-06 14:08:49.255355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.255369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 
00:25:10.135 [2024-11-06 14:08:49.265307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.265354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.265368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.265376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.265382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.265396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.275354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.275431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.275444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.275451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.275458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.275471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.285325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.285368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.285381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.285388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.285395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.285409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 
00:25:10.135 [2024-11-06 14:08:49.295401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.295493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.295506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.295513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.295519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.295533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.305396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.305448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.305462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.305469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.305475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.305489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.315446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.315535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.315548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.315556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.315563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.315577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 
00:25:10.135 [2024-11-06 14:08:49.325480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.325522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.325536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.325543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.325549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.325562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.335492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.335537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.335550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.335557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.335564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.335577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.345496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.345552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.345568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.345575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.345582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.345596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 
00:25:10.135 [2024-11-06 14:08:49.355514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.355556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.355569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.355576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.355582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.355595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.365427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.365472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.365486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.135 [2024-11-06 14:08:49.365492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.135 [2024-11-06 14:08:49.365499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.135 [2024-11-06 14:08:49.365512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.135 qpair failed and we were unable to recover it. 00:25:10.135 [2024-11-06 14:08:49.375585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.135 [2024-11-06 14:08:49.375629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.135 [2024-11-06 14:08:49.375643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.136 [2024-11-06 14:08:49.375649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.136 [2024-11-06 14:08:49.375656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.136 [2024-11-06 14:08:49.375669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.136 qpair failed and we were unable to recover it. 
00:25:10.136 [2024-11-06 14:08:49.385592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.136 [2024-11-06 14:08:49.385642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.136 [2024-11-06 14:08:49.385655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.136 [2024-11-06 14:08:49.385661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.136 [2024-11-06 14:08:49.385671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.136 [2024-11-06 14:08:49.385685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.136 qpair failed and we were unable to recover it. 00:25:10.136 [2024-11-06 14:08:49.395619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.136 [2024-11-06 14:08:49.395662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.136 [2024-11-06 14:08:49.395675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.136 [2024-11-06 14:08:49.395682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.136 [2024-11-06 14:08:49.395688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.136 [2024-11-06 14:08:49.395702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.136 qpair failed and we were unable to recover it. 00:25:10.136 [2024-11-06 14:08:49.405540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.136 [2024-11-06 14:08:49.405587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.136 [2024-11-06 14:08:49.405604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.136 [2024-11-06 14:08:49.405611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.136 [2024-11-06 14:08:49.405617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.136 [2024-11-06 14:08:49.405633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.136 qpair failed and we were unable to recover it. 
00:25:10.494 [2024-11-06 14:08:49.415663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.415707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.415721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.415728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.415735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.415749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.425702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.425752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.425766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.425773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.425780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.425793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.435732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.435778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.435791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.435798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.435805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.435820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 
00:25:10.494 [2024-11-06 14:08:49.445768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.445812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.445825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.445832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.445839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.445852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.455793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.455840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.455855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.455862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.455868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.455882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.465827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.465875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.465889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.465896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.465902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.465916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 
00:25:10.494 [2024-11-06 14:08:49.475852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.475927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.475944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.475951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.475957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.475971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.485737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.485778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.485792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.485799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.485805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.485819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.495910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.495991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.496004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.496011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.496019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.496033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 
00:25:10.494 [2024-11-06 14:08:49.505916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.505967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.505982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.505989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.505996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.506015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.515822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.515871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.515885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.515892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.515902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.515916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 00:25:10.494 [2024-11-06 14:08:49.525973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.494 [2024-11-06 14:08:49.526027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.494 [2024-11-06 14:08:49.526041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.494 [2024-11-06 14:08:49.526048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.494 [2024-11-06 14:08:49.526054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:10.494 [2024-11-06 14:08:49.526069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.494 qpair failed and we were unable to recover it. 
00:25:10.494 [2024-11-06 14:08:49.536023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.536076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.536101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.536109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.536116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.536135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.546078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.546133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.546158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.546167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.546174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.546194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.556075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.556123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.556139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.556146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.556152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.556168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.566086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.566149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.566163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.566170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.566177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.566191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.576173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.576230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.576248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.576256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.576262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.576277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.586159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.586208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.586221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.586229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.586235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.586253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.596086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.596129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.596144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.596151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.596158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.596173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.606204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.606255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.606274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.606281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.606287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.606302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.616176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.616251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.616265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.616272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.616278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.616292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.626274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.626320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.626333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.626340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.626346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.626360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.636341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.636382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.636396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.636403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.636409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.636423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.646302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.646346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.646359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.646366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.646376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.646390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.656358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.656406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.656420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.656428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.656434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.656449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.666361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.666405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.666420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.666427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.666433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.666448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.676390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.676437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.676451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.676458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.676465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.676479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.686370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.686410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.686424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.686431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.686438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.686451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.696436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.696482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.696495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.696502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.696508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.696522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.706488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.706578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.706592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.706598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.706605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.706618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.716497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.716537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.716551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.716558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.716564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.716577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.726550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.726594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.726608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.726615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.726621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.726635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.736551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.736596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.736613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.736620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.736627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.736640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.746593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.746643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.746656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.746663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.746670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.746683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.756532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.756603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.756616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.756623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.756629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.756643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.495 [2024-11-06 14:08:49.766514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.495 [2024-11-06 14:08:49.766555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.495 [2024-11-06 14:08:49.766568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.495 [2024-11-06 14:08:49.766575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.495 [2024-11-06 14:08:49.766581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.495 [2024-11-06 14:08:49.766594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.495 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.776686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.776735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.776748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.776758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.756 [2024-11-06 14:08:49.776765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.756 [2024-11-06 14:08:49.776778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.756 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.786718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.786766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.786780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.786787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.756 [2024-11-06 14:08:49.786793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.756 [2024-11-06 14:08:49.786806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.756 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.796739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.796780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.796793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.796800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.756 [2024-11-06 14:08:49.796806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.756 [2024-11-06 14:08:49.796820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.756 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.806721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.806763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.806776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.806783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.756 [2024-11-06 14:08:49.806790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.756 [2024-11-06 14:08:49.806803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.756 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.816781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.816823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.816836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.816843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.756 [2024-11-06 14:08:49.816849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.756 [2024-11-06 14:08:49.816863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.756 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.826814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.826861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.826874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.826881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.756 [2024-11-06 14:08:49.826887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.756 [2024-11-06 14:08:49.826901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.756 qpair failed and we were unable to recover it.
00:25:10.756 [2024-11-06 14:08:49.836822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.756 [2024-11-06 14:08:49.836872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.756 [2024-11-06 14:08:49.836885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.756 [2024-11-06 14:08:49.836892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.836898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.836912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.846717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.846761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.846774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.846781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.846788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.846801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.856893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.856940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.856953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.856960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.856966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.856980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.866934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.866993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.867023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.867032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.867039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.867058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.876935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.876991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.877017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.877026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.877033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.877052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.886956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.887023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.887039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.887046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.887053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.887068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.897007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.897056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.897070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.897077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.897083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.897097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.907026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.907081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.907095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.907106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.907113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.907127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.917043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.917101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.917114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.917121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.917127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.917141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.927075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.927115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.927128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.927135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.927141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.927155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.937118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.937203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.937216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.937223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.937229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.937243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.947147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.947240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.757 [2024-11-06 14:08:49.947257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.757 [2024-11-06 14:08:49.947264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.757 [2024-11-06 14:08:49.947270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.757 [2024-11-06 14:08:49.947284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.757 qpair failed and we were unable to recover it.
00:25:10.757 [2024-11-06 14:08:49.957155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.757 [2024-11-06 14:08:49.957252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:49.957266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:49.957273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:49.957280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:49.957294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:49.967195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:49.967241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:49.967258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:49.967265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:49.967272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:49.967286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:49.977279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:49.977328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:49.977342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:49.977349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:49.977355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:49.977369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:49.987241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:49.987299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:49.987312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:49.987319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:49.987325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:49.987339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:49.997274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:49.997318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:49.997335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:49.997343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:49.997349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:49.997363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:50.007356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:50.007402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:50.007418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:50.007425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:50.007432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:50.007447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:50.017400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:50.017477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:50.017491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:50.017498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:50.017505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:50.017519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:50.027396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:50.027451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:50.027465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:50.027473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:50.027479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:50.027494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:10.758 [2024-11-06 14:08:50.037415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:10.758 [2024-11-06 14:08:50.037465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:10.758 [2024-11-06 14:08:50.037478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:10.758 [2024-11-06 14:08:50.037490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:10.758 [2024-11-06 14:08:50.037497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:10.758 [2024-11-06 14:08:50.037511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:10.758 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.047416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.047470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.047483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.047490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.047497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.047511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.057483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.057556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.057572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.057579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.057585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.057600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.067480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.067564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.067577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.067584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.067590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.067604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.077498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.077573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.077588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.077596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.077603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.077617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.087542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.087585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.087599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.087606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.087613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.087626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.097548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.097635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.097648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.097655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.097662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.097675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.107573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.019 [2024-11-06 14:08:50.107627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.019 [2024-11-06 14:08:50.107641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.019 [2024-11-06 14:08:50.107648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.019 [2024-11-06 14:08:50.107655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.019 [2024-11-06 14:08:50.107668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.019 qpair failed and we were unable to recover it.
00:25:11.019 [2024-11-06 14:08:50.117598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.117645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.117658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.117665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.117672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.117685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.127661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.127704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.127720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.127727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.127734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.127747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.137655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.137701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.137715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.137722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.137729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.137742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.147682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.147728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.147741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.147748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.147754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.147768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.157702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.157749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.157762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.157769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.157775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.157789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.167757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.167799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.167812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.167822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.167829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.167842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.177744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.177799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.177812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.177819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.177826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.177839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.187794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.187861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.187874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.187882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.187889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.187902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.197672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.197719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.197732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.197740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.197746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.197760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.207812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.207864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.207878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.207885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.207891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.207905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.217845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.020 [2024-11-06 14:08:50.217891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.020 [2024-11-06 14:08:50.217904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.020 [2024-11-06 14:08:50.217911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.020 [2024-11-06 14:08:50.217918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490
00:25:11.020 [2024-11-06 14:08:50.217931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:11.020 qpair failed and we were unable to recover it.
00:25:11.020 [2024-11-06 14:08:50.227904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.020 [2024-11-06 14:08:50.227959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.020 [2024-11-06 14:08:50.227972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.020 [2024-11-06 14:08:50.227980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.020 [2024-11-06 14:08:50.227986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.020 [2024-11-06 14:08:50.228000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.020 qpair failed and we were unable to recover it. 00:25:11.020 [2024-11-06 14:08:50.237786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.020 [2024-11-06 14:08:50.237830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.020 [2024-11-06 14:08:50.237845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.237852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.237859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.237873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 00:25:11.021 [2024-11-06 14:08:50.247931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.021 [2024-11-06 14:08:50.247975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.021 [2024-11-06 14:08:50.247988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.247995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.248002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.248015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 
00:25:11.021 [2024-11-06 14:08:50.257945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.021 [2024-11-06 14:08:50.258002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.021 [2024-11-06 14:08:50.258016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.258023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.258029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.258043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 00:25:11.021 [2024-11-06 14:08:50.268003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.021 [2024-11-06 14:08:50.268062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.021 [2024-11-06 14:08:50.268075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.268082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.268088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.268102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 00:25:11.021 [2024-11-06 14:08:50.278076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.021 [2024-11-06 14:08:50.278122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.021 [2024-11-06 14:08:50.278136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.278143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.278149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.278163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 
00:25:11.021 [2024-11-06 14:08:50.287923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.021 [2024-11-06 14:08:50.287973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.021 [2024-11-06 14:08:50.287988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.287995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.288001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.288015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 00:25:11.021 [2024-11-06 14:08:50.297973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.021 [2024-11-06 14:08:50.298028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.021 [2024-11-06 14:08:50.298042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.021 [2024-11-06 14:08:50.298052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.021 [2024-11-06 14:08:50.298059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.021 [2024-11-06 14:08:50.298073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.021 qpair failed and we were unable to recover it. 00:25:11.282 [2024-11-06 14:08:50.308092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.282 [2024-11-06 14:08:50.308140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.282 [2024-11-06 14:08:50.308153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.282 [2024-11-06 14:08:50.308160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.282 [2024-11-06 14:08:50.308167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.282 [2024-11-06 14:08:50.308180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.282 qpair failed and we were unable to recover it. 
00:25:11.282 [2024-11-06 14:08:50.318128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.282 [2024-11-06 14:08:50.318169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.282 [2024-11-06 14:08:50.318183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.282 [2024-11-06 14:08:50.318190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.282 [2024-11-06 14:08:50.318196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.282 [2024-11-06 14:08:50.318210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.282 qpair failed and we were unable to recover it. 00:25:11.282 [2024-11-06 14:08:50.328118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.282 [2024-11-06 14:08:50.328167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.282 [2024-11-06 14:08:50.328180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.282 [2024-11-06 14:08:50.328187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.282 [2024-11-06 14:08:50.328194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.282 [2024-11-06 14:08:50.328207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.282 qpair failed and we were unable to recover it. 00:25:11.282 [2024-11-06 14:08:50.338174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.282 [2024-11-06 14:08:50.338228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.282 [2024-11-06 14:08:50.338242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.282 [2024-11-06 14:08:50.338252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.282 [2024-11-06 14:08:50.338259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.282 [2024-11-06 14:08:50.338273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.282 qpair failed and we were unable to recover it. 
00:25:11.282 [2024-11-06 14:08:50.348129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.282 [2024-11-06 14:08:50.348177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.282 [2024-11-06 14:08:50.348191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.282 [2024-11-06 14:08:50.348198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.282 [2024-11-06 14:08:50.348204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.282 [2024-11-06 14:08:50.348217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.282 qpair failed and we were unable to recover it. 00:25:11.282 [2024-11-06 14:08:50.358239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.358326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.358341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.358348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.358354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.358368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.368251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.368299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.368312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.368319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.368326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.368339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 
00:25:11.283 [2024-11-06 14:08:50.378177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.378223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.378236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.378247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.378253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.378267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.388320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.388369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.388382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.388389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.388396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.388409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.398345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.398395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.398411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.398418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.398424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.398439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 
00:25:11.283 [2024-11-06 14:08:50.408413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.408496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.408510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.408516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.408523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.408537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.418403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.418494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.418507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.418514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.418521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.418535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.428427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.428513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.428526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.428537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.428544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.428558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 
00:25:11.283 [2024-11-06 14:08:50.438474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.438514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.438527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.438534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.438541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.438554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.448500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.448562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.448575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.448582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.448588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.448602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 00:25:11.283 [2024-11-06 14:08:50.458510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.458596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.458609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.458616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.283 [2024-11-06 14:08:50.458623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.283 [2024-11-06 14:08:50.458637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.283 qpair failed and we were unable to recover it. 
00:25:11.283 [2024-11-06 14:08:50.468404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.283 [2024-11-06 14:08:50.468452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.283 [2024-11-06 14:08:50.468465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.283 [2024-11-06 14:08:50.468473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.468480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.468493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.284 [2024-11-06 14:08:50.478470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.478553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.478567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.478574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.478580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.478594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.284 [2024-11-06 14:08:50.488597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.488640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.488654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.488661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.488668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.488681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 
00:25:11.284 [2024-11-06 14:08:50.498584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.498633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.498646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.498653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.498660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.498673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.284 [2024-11-06 14:08:50.508647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.508717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.508731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.508738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.508744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.508757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.284 [2024-11-06 14:08:50.518639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.518683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.518696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.518703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.518710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.518723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 
00:25:11.284 [2024-11-06 14:08:50.528556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.528635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.528648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.528655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.528661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.528674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.284 [2024-11-06 14:08:50.538587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.538634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.538647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.538654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.538661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.538674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.284 [2024-11-06 14:08:50.548753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.548800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.548813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.548820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.548826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.548840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 
00:25:11.284 [2024-11-06 14:08:50.558766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.284 [2024-11-06 14:08:50.558817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.284 [2024-11-06 14:08:50.558831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.284 [2024-11-06 14:08:50.558841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.284 [2024-11-06 14:08:50.558847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.284 [2024-11-06 14:08:50.558861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.284 qpair failed and we were unable to recover it. 00:25:11.545 [2024-11-06 14:08:50.568802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.545 [2024-11-06 14:08:50.568856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.545 [2024-11-06 14:08:50.568869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.545 [2024-11-06 14:08:50.568877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.545 [2024-11-06 14:08:50.568883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.545 [2024-11-06 14:08:50.568897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.545 qpair failed and we were unable to recover it. 00:25:11.545 [2024-11-06 14:08:50.578819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.545 [2024-11-06 14:08:50.578865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.545 [2024-11-06 14:08:50.578878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.545 [2024-11-06 14:08:50.578885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.545 [2024-11-06 14:08:50.578892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.545 [2024-11-06 14:08:50.578905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.545 qpair failed and we were unable to recover it. 
00:25:11.545 [2024-11-06 14:08:50.588854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.545 [2024-11-06 14:08:50.588903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.545 [2024-11-06 14:08:50.588916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.545 [2024-11-06 14:08:50.588923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.545 [2024-11-06 14:08:50.588930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.545 [2024-11-06 14:08:50.588943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.545 qpair failed and we were unable to recover it. 00:25:11.545 [2024-11-06 14:08:50.598787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.545 [2024-11-06 14:08:50.598829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.545 [2024-11-06 14:08:50.598842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.545 [2024-11-06 14:08:50.598849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.545 [2024-11-06 14:08:50.598856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x226b490 00:25:11.545 [2024-11-06 14:08:50.598873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.545 qpair failed and we were unable to recover it. 00:25:11.545 [2024-11-06 14:08:50.608873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.545 [2024-11-06 14:08:50.608933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.545 [2024-11-06 14:08:50.608952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.545 [2024-11-06 14:08:50.608958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.545 [2024-11-06 14:08:50.608964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe314000b90 00:25:11.545 [2024-11-06 14:08:50.608979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:11.545 qpair failed and we were unable to recover it. 
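Each iteration of the loop above carries the same failure signature: the target rejects the I/O queue CONNECT because it no longer recognizes controller ID 0x1, the host's connect poll therefore completes with sct 1, sc 130 (0x82, which the Fabrics command set defines as CONNECT Invalid Parameters), and the queue pair is torn down with transport error -6 (ENXIO), first on qpair id 3 and then on qpair id 4. A rough shell-level probe for when the listener starts accepting CONNECTs again is sketched below; it is hypothetical, uses nvme-cli rather than the SPDK initiator that produced this log, and its retry count and interval are arbitrary:

```bash
#!/usr/bin/env bash
# Hypothetical probe, not part of this test run. The address, port and
# subsystem NQN are taken from the log above; everything else is illustrative.
TRADDR=10.0.0.2
TRSVCID=4420
SUBNQN=nqn.2016-06.io.spdk:cnode1

for attempt in $(seq 1 50); do
    # "nvme discover" issues its own fabrics CONNECT to the discovery
    # controller, so a successful discovery means the TCP listener is
    # accepting connections again.
    if nvme discover -t tcp -a "$TRADDR" -s "$TRSVCID" >/dev/null 2>&1; then
        echo "target reachable again after ${attempt} attempt(s)"
        exit 0
    fi
    sleep 0.1
done
echo "target ${TRADDR}:${TRSVCID} (${SUBNQN}) still rejecting connects" >&2
exit 1
```

Note that this harness provokes the disconnect deliberately, so the repeated failures are the expected path here; the recovery below is the part under test.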
00:25:11.545 [2024-11-06 14:08:50.618882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:11.545 [2024-11-06 14:08:50.618924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:11.545 [2024-11-06 14:08:50.618936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:11.545 [2024-11-06 14:08:50.618941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:11.545 [2024-11-06 14:08:50.618946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe314000b90
00:25:11.545 [2024-11-06 14:08:50.618957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:11.545 qpair failed and we were unable to recover it.
00:25:11.545 [2024-11-06 14:08:50.619090] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:25:11.545 A controller has encountered a failure and is being reset.
00:25:11.545 [2024-11-06 14:08:50.619197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2268020 (9): Bad file descriptor
00:25:11.545 Controller properly reset.
00:25:11.545 Initializing NVMe Controllers
00:25:11.545 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:11.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:11.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:25:11.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:25:11.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:25:11.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:25:11.545 Initialization complete. Launching workers.
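Once the keep-alive submission fails, the SPDK host driver declares the controller failed, resets it, and the application reattaches and relaunches its per-core workers, which is what the banner above reports. That banner matches the startup output of SPDK's example initiator applications (perf-style tools). A minimal sketch of the kind of invocation that produces it follows; the binary path, queue depth, I/O size, read/write mix, and runtime are assumptions for illustration, and only the transport string reflects values visible in this log:

```bash
# Hypothetical perf-style run against the same subsystem. With -c 0xF the
# application starts one worker on each of lcores 0-3, matching the four
# "Starting thread on core N" lines that follow in this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/perf" \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```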
00:25:11.545 Starting thread on core 1 00:25:11.545 Starting thread on core 2 00:25:11.545 Starting thread on core 3 00:25:11.545 Starting thread on core 0 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:11.545 00:25:11.545 real 0m11.485s 00:25:11.545 user 0m21.520s 00:25:11.545 sys 0m3.393s 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.545 ************************************ 00:25:11.545 END TEST nvmf_target_disconnect_tc2 00:25:11.545 ************************************ 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.545 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.545 rmmod nvme_tcp 00:25:11.545 rmmod nvme_fabrics 00:25:11.805 rmmod nvme_keyring 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1046577 ']' 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1046577 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1046577 ']' 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1046577 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1046577 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1046577' 00:25:11.805 killing process with pid 1046577 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 1046577 00:25:11.805 14:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1046577 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.805 14:08:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.344 14:08:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.344 00:25:14.344 real 0m19.394s 00:25:14.344 user 0m49.428s 00:25:14.344 sys 0m7.700s 00:25:14.344 14:08:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:14.344 14:08:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:14.344 ************************************ 00:25:14.344 END TEST nvmf_target_disconnect 00:25:14.344 ************************************ 00:25:14.344 14:08:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:14.344 00:25:14.344 real 5m30.550s 00:25:14.344 user 10m19.770s 00:25:14.344 sys 1m39.307s 00:25:14.344 14:08:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:14.344 14:08:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.344 ************************************ 00:25:14.344 END TEST nvmf_host 00:25:14.344 ************************************ 00:25:14.344 14:08:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:14.344 14:08:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:14.344 14:08:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:14.344 14:08:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:14.344 14:08:53 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:14.344 14:08:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.344 ************************************ 00:25:14.344 START TEST nvmf_target_core_interrupt_mode 00:25:14.344 ************************************ 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:14.344 * Looking for test storage... 00:25:14.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:14.344 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.345 --rc genhtml_branch_coverage=1 00:25:14.345 --rc genhtml_function_coverage=1 00:25:14.345 --rc genhtml_legend=1 00:25:14.345 --rc geninfo_all_blocks=1 00:25:14.345 --rc geninfo_unexecuted_blocks=1 00:25:14.345 00:25:14.345 ' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.345 --rc genhtml_branch_coverage=1 00:25:14.345 --rc genhtml_function_coverage=1 00:25:14.345 --rc genhtml_legend=1 00:25:14.345 --rc geninfo_all_blocks=1 00:25:14.345 --rc geninfo_unexecuted_blocks=1 00:25:14.345 00:25:14.345 ' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.345 --rc genhtml_branch_coverage=1 00:25:14.345 --rc genhtml_function_coverage=1 00:25:14.345 --rc genhtml_legend=1 00:25:14.345 --rc geninfo_all_blocks=1 00:25:14.345 --rc geninfo_unexecuted_blocks=1 00:25:14.345 00:25:14.345 ' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.345 --rc genhtml_branch_coverage=1 00:25:14.345 --rc genhtml_function_coverage=1 00:25:14.345 --rc genhtml_legend=1 00:25:14.345 --rc geninfo_all_blocks=1 00:25:14.345 --rc geninfo_unexecuted_blocks=1 00:25:14.345 00:25:14.345 ' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:14.345 ************************************ 00:25:14.345 START TEST nvmf_abort 00:25:14.345 ************************************ 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:14.345 * Looking for test storage... 00:25:14.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.345 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.346 --rc genhtml_branch_coverage=1 00:25:14.346 --rc genhtml_function_coverage=1 00:25:14.346 --rc genhtml_legend=1 00:25:14.346 --rc geninfo_all_blocks=1 00:25:14.346 --rc geninfo_unexecuted_blocks=1 00:25:14.346 00:25:14.346 ' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.346 --rc genhtml_branch_coverage=1 00:25:14.346 --rc genhtml_function_coverage=1 00:25:14.346 --rc genhtml_legend=1 00:25:14.346 --rc geninfo_all_blocks=1 00:25:14.346 --rc geninfo_unexecuted_blocks=1 00:25:14.346 00:25:14.346 ' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.346 --rc genhtml_branch_coverage=1 00:25:14.346 --rc genhtml_function_coverage=1 00:25:14.346 --rc genhtml_legend=1 00:25:14.346 --rc geninfo_all_blocks=1 00:25:14.346 --rc geninfo_unexecuted_blocks=1 00:25:14.346 00:25:14.346 ' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.346 --rc genhtml_branch_coverage=1 00:25:14.346 --rc genhtml_function_coverage=1 00:25:14.346 --rc genhtml_legend=1 00:25:14.346 --rc geninfo_all_blocks=1 00:25:14.346 --rc geninfo_unexecuted_blocks=1 00:25:14.346 00:25:14.346 ' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.346 14:08:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.346 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.347 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.347 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.347 14:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.630 14:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:19.630 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:19.630 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:19.630 Found net devices under 0000:31:00.0: cvl_0_0 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:25:19.630 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:19.631 Found net devices under 0000:31:00.1: cvl_0_1 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.631 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.891 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.891 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.891 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.891 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:25:19.891 00:25:19.891 --- 10.0.0.2 ping statistics --- 00:25:19.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.892 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:25:19.892 00:25:19.892 --- 10.0.0.1 ping statistics --- 00:25:19.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.892 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1052547 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1052547 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1052547 ']' 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:19.892 14:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.892 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:19.892 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.892 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:19.892 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:19.892 [2024-11-06 14:08:59.039205] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:19.892 [2024-11-06 14:08:59.040352] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:25:19.892 [2024-11-06 14:08:59.040403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.892 [2024-11-06 14:08:59.131049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.152 [2024-11-06 14:08:59.183336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.152 [2024-11-06 14:08:59.183388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.152 [2024-11-06 14:08:59.183397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.152 [2024-11-06 14:08:59.183404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.152 [2024-11-06 14:08:59.183410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.152 [2024-11-06 14:08:59.185320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.152 [2024-11-06 14:08:59.185386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.152 [2024-11-06 14:08:59.185387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.152 [2024-11-06 14:08:59.266333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:20.152 [2024-11-06 14:08:59.267440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:20.152 [2024-11-06 14:08:59.268060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:25:20.152 [2024-11-06 14:08:59.268197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 [2024-11-06 14:08:59.854338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 Malloc0 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 Delay0 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 [2024-11-06 14:08:59.918141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:20.720 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.721 14:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:25:20.978 [2024-11-06 14:09:00.022006] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:22.883 Initializing NVMe Controllers 00:25:22.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:25:22.883 controller IO queue size 128 less than required 00:25:22.883 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:25:22.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:22.883 Initialization complete. Launching workers. 
00:25:22.883 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28835 00:25:22.883 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28892, failed to submit 66 00:25:22.883 success 28835, unsuccessful 57, failed 0 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:22.883 rmmod nvme_tcp 00:25:22.883 rmmod nvme_fabrics 00:25:22.883 rmmod nvme_keyring 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1052547 ']' 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1052547 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1052547 ']' 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1052547 00:25:22.883 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1052547 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1052547' 00:25:23.143 killing process with pid 1052547 
00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1052547 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1052547 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.143 14:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.680 00:25:25.680 real 0m11.081s 00:25:25.680 user 0m9.968s 00:25:25.680 sys 0m5.336s 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:25.680 ************************************ 00:25:25.680 END TEST nvmf_abort 00:25:25.680 ************************************ 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:25.680 ************************************ 00:25:25.680 START TEST nvmf_ns_hotplug_stress 00:25:25.680 ************************************ 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:25.680 * Looking for test storage... 
00:25:25.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.680 --rc genhtml_branch_coverage=1 00:25:25.680 --rc genhtml_function_coverage=1 00:25:25.680 --rc genhtml_legend=1 00:25:25.680 --rc geninfo_all_blocks=1 00:25:25.680 --rc geninfo_unexecuted_blocks=1 00:25:25.680 00:25:25.680 ' 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.680 --rc genhtml_branch_coverage=1 00:25:25.680 --rc genhtml_function_coverage=1 00:25:25.680 --rc genhtml_legend=1 00:25:25.680 --rc geninfo_all_blocks=1 00:25:25.680 --rc geninfo_unexecuted_blocks=1 00:25:25.680 00:25:25.680 ' 00:25:25.680 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:25.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.680 --rc genhtml_branch_coverage=1 00:25:25.680 --rc genhtml_function_coverage=1 00:25:25.680 --rc genhtml_legend=1 00:25:25.680 --rc geninfo_all_blocks=1 00:25:25.681 --rc geninfo_unexecuted_blocks=1 00:25:25.681 00:25:25.681 ' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.681 --rc genhtml_branch_coverage=1 00:25:25.681 --rc genhtml_function_coverage=1 
00:25:25.681 --rc genhtml_legend=1 00:25:25.681 --rc geninfo_all_blocks=1 00:25:25.681 --rc geninfo_unexecuted_blocks=1 00:25:25.681 00:25:25.681 ' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.681 14:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.958 14:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.958 14:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:30.958 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:30.958 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.958 
14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:30.958 Found net devices under 0000:31:00.0: cvl_0_0 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:30.958 Found net devices under 0000:31:00.1: cvl_0_1 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.958 14:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.958 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.959 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.959 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.959 14:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:25:30.959 00:25:30.959 --- 10.0.0.2 ping statistics --- 00:25:30.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.959 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:25:30.959 00:25:30.959 --- 10.0.0.1 ping statistics --- 00:25:30.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.959 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1058183 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1058183 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1058183 ']' 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
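The two ping exchanges above confirm the plumbing that nvmftestinit built a few lines earlier: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Stripped of the xtrace framing, and keeping the names and addresses used in this run, the setup amounts to:

  # Start from clean interfaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # Give the target port its own network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address the initiator (root namespace) and target (inside the netns).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring everything up, including loopback inside the namespace.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so teardown can find it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'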
00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:30.959 14:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:30.959 [2024-11-06 14:09:10.200280] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:30.959 [2024-11-06 14:09:10.201442] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:25:30.959 [2024-11-06 14:09:10.201492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.219 [2024-11-06 14:09:10.297569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:31.219 [2024-11-06 14:09:10.349184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.219 [2024-11-06 14:09:10.349238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.219 [2024-11-06 14:09:10.349256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.219 [2024-11-06 14:09:10.349264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.219 [2024-11-06 14:09:10.349270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.219 [2024-11-06 14:09:10.351358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.219 [2024-11-06 14:09:10.351573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.219 [2024-11-06 14:09:10.351574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.219 [2024-11-06 14:09:10.431796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:31.219 [2024-11-06 14:09:10.432950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:31.219 [2024-11-06 14:09:10.433345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:31.219 [2024-11-06 14:09:10.433382] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
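Startup is now complete: the trace above shows DPDK initialized, three reactors up, and every spdk_thread switched to interrupt mode. For reference, this is the launch line nvmfappstart assembled for this run, pulled apart flag by flag (the namespace name and workspace path are specific to this job):

  # -i 0              shared-memory id (the startup notice says a snapshot can
  #                   later be taken with 'spdk_trace -s nvmf -i 0')
  # -e 0xFFFF         tracepoint group mask, per the notice above
  # --interrupt-mode  reactors wait on events instead of busy-polling
  # -m 0xE            core mask 0b1110 -> reactors on cores 1, 2 and 3
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE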
00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:25:31.812 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:32.072 [2024-11-06 14:09:11.168466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.072 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:32.072 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.333 [2024-11-06 14:09:11.493204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.333 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:32.593 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:25:32.593 Malloc0 00:25:32.593 14:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:32.854 Delay0 00:25:32.854 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.113 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:25:33.113 NULL1 00:25:33.113 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:25:33.374 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1058597 00:25:33.374 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:33.374 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:25:33.374 14:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.756 Read completed with error (sct=0, sc=11) 00:25:34.756 14:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:34.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:34.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:34.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:34.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:34.756 14:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:25:34.756 14:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:25:34.756 true 00:25:35.015 14:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:35.015 14:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:35.953 14:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:35.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:35.953 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:25:35.953 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:25:35.953 true 00:25:35.953 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:35.953 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:36.213 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.473 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:25:36.473 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:25:36.473 true 00:25:36.473 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:36.473 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:36.732 14:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.993 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:25:36.993 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:25:36.993 true 00:25:36.993 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:36.993 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:37.252 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:37.252 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:25:37.252 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:25:37.511 true 00:25:37.511 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:37.511 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:37.771 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:37.771 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:25:37.771 14:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:25:38.030 true 00:25:38.030 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1058597 00:25:38.030 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:38.030 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:38.289 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:25:38.289 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:25:38.548 true 00:25:38.548 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:38.548 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:38.548 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:38.808 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:25:38.808 14:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:25:38.808 true 00:25:38.808 14:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:38.808 14:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:39.747 14:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:40.007 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:25:40.007 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:25:40.266 true 00:25:40.266 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:40.267 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:40.267 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:40.529 14:09:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:25:40.529 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:25:40.529 true 00:25:40.529 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:40.529 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:40.789 14:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.048 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:25:41.048 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:25:41.048 true 00:25:41.048 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:41.048 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:41.307 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.307 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:25:41.307 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:25:41.566 true 00:25:41.566 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:41.566 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:41.825 14:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.825 14:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:25:41.825 14:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:25:42.084 true 00:25:42.085 14:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:42.085 14:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:43.025 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.286 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:25:43.286 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:25:43.286 true 00:25:43.286 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:43.286 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:43.545 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.806 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:25:43.806 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:25:43.806 true 00:25:43.806 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:43.806 14:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:44.065 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:44.065 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:25:44.065 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:25:44.326 true 00:25:44.326 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:44.326 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:44.586 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:44.586 14:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:25:44.586 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:25:44.846 true 00:25:44.846 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:44.846 14:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:44.846 14:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:45.106 14:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:25:45.106 14:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:25:45.106 true 00:25:45.366 14:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:45.366 14:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:46.303 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:46.304 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:25:46.304 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:25:46.564 true 00:25:46.564 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:46.564 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.564 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:46.823 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:25:46.824 14:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:25:46.824 true 00:25:47.083 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:47.083 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.083 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.342 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:25:47.343 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:25:47.343 true 00:25:47.343 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:47.343 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.602 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.863 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:25:47.863 14:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:25:47.863 true 00:25:47.863 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:47.863 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:48.123 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:48.123 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:25:48.123 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:25:48.382 true 00:25:48.382 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:48.382 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:48.641 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:25:48.641 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:25:48.641 14:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:25:48.901 true 00:25:48.901 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:48.901 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:48.901 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:49.161 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:25:49.161 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:25:49.419 true 00:25:49.419 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:49.419 14:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.356 14:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:50.356 14:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:25:50.356 14:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:25:50.614 true 00:25:50.614 14:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:50.614 14:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.614 14:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:50.873 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:25:50.873 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:25:51.132 true 00:25:51.132 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1058597 00:25:51.132 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:51.132 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:51.391 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:25:51.391 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:25:51.391 true 00:25:51.392 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:51.392 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:51.651 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:51.910 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:25:51.910 14:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:25:51.910 true 00:25:51.910 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:51.910 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:52.170 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:52.170 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:25:52.170 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:25:52.429 true 00:25:52.429 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:52.429 14:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.368 14:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.627 14:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:25:53.627 14:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:25:53.627 true 00:25:53.627 14:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:53.627 14:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.886 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.886 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:25:53.886 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:25:54.145 true 00:25:54.145 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:54.145 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:54.405 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.405 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:25:54.405 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:25:54.665 true 00:25:54.665 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:54.665 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:54.923 14:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.923 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:25:54.924 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:25:55.182 true 00:25:55.182 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:55.182 14:09:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:55.183 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:55.442 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:25:55.442 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:25:55.701 true 00:25:55.701 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:55.701 14:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:56.636 14:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:56.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.636 14:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:25:56.636 14:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:25:56.895 true 00:25:56.895 14:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:56.895 14:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:56.895 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:57.155 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:25:57.155 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:25:57.155 true 00:25:57.414 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:57.414 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:57.414 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:25:57.673 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:25:57.673 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:25:57.673 true 00:25:57.673 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:57.673 14:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:57.931 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:58.190 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:25:58.190 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:25:58.190 true 00:25:58.190 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:58.190 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:58.450 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:58.450 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:25:58.450 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:25:58.709 true 00:25:58.709 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:58.710 14:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:58.969 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:58.969 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:25:58.969 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:25:59.228 true 00:25:59.228 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1058597 00:25:59.228 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.228 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.487 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:25:59.487 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:25:59.746 true 00:25:59.746 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:25:59.746 14:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.684 14:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:00.684 14:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:26:00.684 14:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:26:00.943 true 00:26:00.943 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:26:00.943 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.943 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:01.203 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:26:01.203 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:26:01.462 true 00:26:01.462 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:26:01.462 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:01.462 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:01.722 14:09:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:26:01.722 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:26:01.722 true 00:26:01.722 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:26:01.722 14:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:01.982 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:02.241 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:26:02.241 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:26:02.241 true 00:26:02.241 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:26:02.241 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:02.536 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:02.536 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:26:02.536 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:26:02.856 true 00:26:02.856 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:26:02.856 14:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:03.795 Initializing NVMe Controllers 00:26:03.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.795 Controller IO queue size 128, less than required. 00:26:03.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.795 Controller IO queue size 128, less than required. 00:26:03.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:03.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:03.795 Initialization complete. Launching workers. 
00:26:03.795 ========================================================
00:26:03.795                                                  Latency(us)
00:26:03.795 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:26:03.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   280.46       0.14  162185.26    2412.68 1013542.47
00:26:03.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10760.65       5.25   11895.11    1842.60  402838.93
00:26:03.795 ========================================================
00:26:03.795 Total                                                                    : 11041.11       5.39   15712.74    1842.60 1013542.47
00:26:03.795
00:26:03.795 14:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:04.055 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:26:04.055 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:26:04.055 true 00:26:04.055 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1058597 00:26:04.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1058597) - No such process 00:26:04.055 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1058597 00:26:04.055 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:04.313 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:04.313 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:04.313 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:04.313 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:04.313 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:04.313 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:04.572 null0 00:26:04.572 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:04.572 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:04.572 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:04.832 null1 00:26:04.832 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:04.832 
14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:04.832 14:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:04.832 null2 00:26:04.832 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:04.832 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:04.832 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:05.092 null3 00:26:05.092 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:05.092 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:05.092 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:05.092 null4 00:26:05.092 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:05.092 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:05.092 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:05.351 null5 00:26:05.351 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:05.351 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:05.351 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:05.351 null6 00:26:05.351 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:05.351 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:05.351 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:05.612 null7 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.612 14:09:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
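The interleaved xtrace above is the eight backgrounded workers entering add_remove() at roughly the same time, which is why their @14-@18 lines mix together. Pieced back together from those tags, each worker's body is approximately the following sketch (reconstructed from the trace, not quoted from ns_hotplug_stress.sh; $rpc_py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path seen in the log):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach $bdev as namespace $nsid of cnode1 (trace tag @17) ...
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            # ... then immediately detach it again (trace tag @18)
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

Because all eight workers hammer the same subsystem (nqn.2016-06.io.spdk:cnode1) with different namespace IDs, the trace lines from here on alternate between workers in no fixed order.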
00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.612 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
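The launcher side is visible in the @58-@66 tags (nthreads=8, pids=(), bdev_null_create null0 through null7, pids+=($!), and the eventual wait). It corresponds to something like the sketch below; the exact loop structure is an inference from the trace, only the individual commands are taken from it:

    nthreads=8
    pids=()
    # @59-@60: one 100 MiB null bdev with a 4 KiB block size per worker
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create null$i 100 4096
    done
    # @62-@64: fork one add_remove worker per bdev, collecting its PID
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    # @66: block until every worker has finished its ten add/remove cycles
    wait "${pids[@]}"

The "wait 1065706 1065707 1065708 1065709 1065712 1065713 1065715 1065718" entry below is exactly that last line after "${pids[@]}" has expanded to the eight worker PIDs.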
00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1065706 1065707 1065708 1065709 1065712 1065713 1065715 1065718 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.613 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:05.873 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:05.873 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:05.873 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:05.873 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:05.873 14:09:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:05.873 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:06.131 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.131 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:06.132 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.391 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:06.392 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 
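The stress loop traced throughout this section alternates two SPDK RPCs against subsystem nqn.2016-06.io.spdk:cnode1. A minimal standalone pair, with the commands taken verbatim from the trace above and the arguments read as the trace suggests (-n appears to select the namespace ID; the trailing null* argument names the backing bdev):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# attach bdev null3 to the subsystem as namespace ID 4
$rpc nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
# detach namespace ID 4 again
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4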
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:06.651 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.911 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:06.911 14:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:06.911 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:07.171 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.431 14:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.431 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:07.432 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:07.693 14:09:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:07.693 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:07.953 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:07.953 14:09:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:07.953 
14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:07.953 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.213 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.214 14:09:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:08.214 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.474 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.475 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.735 14:09:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:08.735 14:09:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:08.735 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:08.995 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:08.996 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:08.996 14:09:48 
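The iteration pattern at ns_hotplug_stress.sh@16-18 above (a counter guarded by (( i < 10 )), nvmf_subsystem_add_ns calls in varying order, interleaved nvmf_subsystem_remove_ns calls) can be approximated as below. This is a reconstruction from the xtrace only, not the script's verbatim source; the random namespace selection and the rpc/nqn variables are assumptions:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
i=0
while (( ++i )) && (( i < 10 )); do        # mirrors the @16 counter checks in the trace
    nsid=$(( RANDOM % 8 + 1 ))             # assumed: namespace IDs 1..8 hit in varying order
    $rpc nvmf_subsystem_add_ns -n $nsid $nqn null$(( nsid - 1 ))
done
for nsid in {1..8}; do                     # the @18 removals tear the namespaces back down
    $rpc nvmf_subsystem_remove_ns $nqn $nsid
done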
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:08.996 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:08.996 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:08.996 rmmod nvme_tcp 00:26:08.996 rmmod nvme_fabrics 00:26:08.996 rmmod nvme_keyring 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1058183 ']' 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1058183 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1058183 ']' 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1058183 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1058183 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1058183' 00:26:09.255 killing process with pid 1058183 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1058183 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1058183 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.255 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.256 14:09:48 
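The nvmftestfini teardown traced at nvmf/common.sh@121-129 and @791 above syncs, relaxes set -e, retries the kernel-module unload up to 20 times, then scrubs SPDK's iptables rules while keeping everything else. A condensed sketch of that visible sequence; the sleep between unload attempts is an assumption, since the first attempt succeeds in this run:

sync
set +e                                  # rmmod can fail while references are still draining
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    sleep 1                             # assumed back-off; not visible in this trace
done
modprobe -v -r nvme-fabrics
set -e
# drop only the SPDK_NVMF-tagged firewall rules, preserving the rest (nvmf/common.sh@791)
iptables-save | grep -v SPDK_NVMF | iptables-restore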
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:09.256 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:09.256 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:09.256 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:09.256 14:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:11.792
00:26:11.792 real 0m46.064s
00:26:11.792 user 2m55.170s
00:26:11.792 sys 0m17.609s
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:11.792 ************************************
00:26:11.792 END TEST nvmf_ns_hotplug_stress
00:26:11.792 ************************************
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:26:11.792 ************************************
00:26:11.792 START TEST nvmf_delete_subsystem
00:26:11.792 ************************************
00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:26:11.792 * Looking for test storage...
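run_test, which brackets every suite in this log (see the END/START banners and the real/user/sys timing just above), behaves roughly like the wrapper below. This is a simplified sketch of its observable behavior, not the autotest_common.sh source; the usage guard is reduced to the '[' 4 -le 1 ']' test traced at @1103, and its error message is an assumption:

run_test() {
    local name=$1; shift
    if [ $# -le 1 ] && [ $# -eq 0 ]; then          # arg-count guard, per the @1103 trace
        echo "run_test: need a test name and a command" >&2   # assumed wording
        return 1
    fi
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                                      # produces the real/user/sys summary above
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}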
00:26:11.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:26:11.792 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.793 --rc genhtml_branch_coverage=1 00:26:11.793 --rc genhtml_function_coverage=1 00:26:11.793 --rc genhtml_legend=1 00:26:11.793 --rc geninfo_all_blocks=1 00:26:11.793 --rc geninfo_unexecuted_blocks=1 00:26:11.793 00:26:11.793 ' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.793 --rc genhtml_branch_coverage=1 00:26:11.793 --rc genhtml_function_coverage=1 00:26:11.793 --rc genhtml_legend=1 00:26:11.793 --rc geninfo_all_blocks=1 00:26:11.793 --rc geninfo_unexecuted_blocks=1 00:26:11.793 00:26:11.793 ' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.793 --rc genhtml_branch_coverage=1 00:26:11.793 --rc genhtml_function_coverage=1 00:26:11.793 --rc genhtml_legend=1 00:26:11.793 --rc geninfo_all_blocks=1 00:26:11.793 --rc geninfo_unexecuted_blocks=1 00:26:11.793 00:26:11.793 ' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.793 --rc genhtml_branch_coverage=1 00:26:11.793 --rc genhtml_function_coverage=1 00:26:11.793 --rc 
genhtml_legend=1 00:26:11.793 --rc geninfo_all_blocks=1 00:26:11.793 --rc geninfo_unexecuted_blocks=1 00:26:11.793 00:26:11.793 ' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.793 14:09:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain directories repeated from earlier sourcings of paths/export.sh ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain directories elided ...]:/var/lib/snapd/snap/bin 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain directories elided ...]:/var/lib/snapd/snap/bin 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain directories elided ...]:/var/lib/snapd/snap/bin 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:11.793 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.794 14:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.068 14:09:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.068 14:09:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:17.068 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:17.068 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.068 14:09:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:17.068 Found net devices under 0000:31:00.0: cvl_0_0 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.068 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:17.069 Found net devices under 0000:31:00.1: cvl_0_1 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:26:17.069 00:26:17.069 --- 10.0.0.2 ping statistics --- 00:26:17.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.069 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:17.069 00:26:17.069 --- 10.0.0.1 ping statistics --- 00:26:17.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.069 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1070889 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1070889 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1070889 ']' 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
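Note: the network plumbing traced above (nvmf/common.sh@265-291) moves the target-side E810 port cvl_0_0 into a private network namespace with 10.0.0.2 and leaves the initiator port cvl_0_1 in the root namespace with 10.0.0.1, so host and target talk over the kernel stack as two independent endpoints; the two pings verify both directions. Condensed from the traced commands (names and addresses copied from the log, iptables comment trimmed):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator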
00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.069 14:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:17.069 [2024-11-06 14:09:55.977374] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:17.069 [2024-11-06 14:09:55.978471] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:26:17.069 [2024-11-06 14:09:55.978519] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.069 [2024-11-06 14:09:56.069662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:17.069 [2024-11-06 14:09:56.122349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.069 [2024-11-06 14:09:56.122402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.069 [2024-11-06 14:09:56.122411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.069 [2024-11-06 14:09:56.122418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.069 [2024-11-06 14:09:56.122425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.069 [2024-11-06 14:09:56.124090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.069 [2024-11-06 14:09:56.124095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.069 [2024-11-06 14:09:56.206972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:17.069 [2024-11-06 14:09:56.207086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:17.069 [2024-11-06 14:09:56.207235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
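Note: the target was launched inside that namespace as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3`. Per SPDK's generic app options: -i sets the shared-memory id (matching --file-prefix=spdk0 in the DPDK parameters), -e 0xFFFF enables all tracepoint groups (hence the spdk_trace notices), -m 0x3 runs reactors on cores 0 and 1 (matching the two "Reactor started" notices), and --interrupt-mode makes reactors sleep on file descriptors instead of busy-polling, which is the mode this test suite exercises. A sketch of how the command line is assembled, following the nvmf/common.sh traces above (SPDK_BIN_DIR is assumed shorthand for the traced binary path):

NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + tracepoint group mask (common.sh@29)
NVMF_APP+=(--interrupt-mode)                   # appended because interrupt mode is enabled (@34)
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")   # @266
"${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0x3 &
nvmfpid=$!                                     # 1070889 in this run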
00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 [2024-11-06 14:09:56.797039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 [2024-11-06 14:09:56.817232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 NULL1 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.638 14:09:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 Delay0 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1071226 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:17.638 14:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:17.638 [2024-11-06 14:09:56.888025] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
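Note: the subsystem under test was built with the rpc_cmd calls traced above; rpc_cmd is a thin wrapper over scripts/rpc.py, so the equivalent direct invocation looks roughly like this (all parameter values copied from the traces; the bdev_delay_create latency arguments are in microseconds, so Delay0 adds about one second per I/O, which keeps perf's 128-deep queue full long enough for the deletion to race with in-flight commands):

rpc=scripts/rpc.py    # against the default /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512    # 1000 MB null backing bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

When nvmf_delete_subsystem below tears the subsystem down under the running perf job, the flood of "completed with error (sct=0, sc=8)" that follows is the expected outcome: status code type 0 is the generic command set, and status code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe spec; "starting I/O failed: -6" corresponds to -ENXIO from attempts to submit to a qpair that is already gone.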
00:26:20.176 14:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.176 14:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.176 14:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... long runs of interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions, punctuated by 'starting I/O failed: -6', elided; the unique error markers from this stretch are kept below ...]
[2024-11-06 14:09:58.977741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f407c00d490 is same with the state(6) to be set
[2024-11-06 14:09:58.978708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f52c0 is same with the state(6) to be set
[2024-11-06 14:09:59.945569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f65e0 is same with the state(6) to be set
[2024-11-06 14:09:59.982351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f407c00d020 is same with the state(6) to be set
[2024-11-06 14:09:59.982447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f407c00d7c0 is same with the state(6) to be set
[2024-11-06 14:09:59.982725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f50e0 is same with the state(6) to be set
[2024-11-06 14:09:59.982969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f54a0 is same with the state(6) to be set
00:26:20.745 Initializing NVMe Controllers 00:26:20.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.745 Controller IO
queue size 128, less than required. 00:26:20.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:20.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:20.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:20.745 Initialization complete. Launching workers. 00:26:20.745 ======================================================== 00:26:20.745 Latency(us) 00:26:20.745 Device Information : IOPS MiB/s Average min max 00:26:20.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.42 0.08 923498.11 190.99 1011548.26 00:26:20.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.31 0.09 885410.36 236.06 1043447.47 00:26:20.745 ======================================================== 00:26:20.745 Total : 331.73 0.16 903484.94 190.99 1043447.47 00:26:20.745 00:26:20.745 [2024-11-06 14:09:59.983509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f65e0 (9): Bad file descriptor 00:26:20.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:20.745 14:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.745 14:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:26:20.745 14:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1071226 00:26:20.745 14:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1071226 00:26:21.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1071226) - No such process 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1071226 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1071226 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1071226 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:21.314 [2024-11-06 14:10:00.505081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1071903 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:21.314 14:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:21.314 [2024-11-06 14:10:00.557241] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
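
The xtrace above launches spdk_nvme_perf in the background (delete_subsystem.sh@52) and then polls it for liveness (@56-58). Reconstructed from the trace, the polling pattern is roughly the following minimal bash sketch; the binary path and perf arguments are copied from the trace, while the failure handling is an assumption:

    #!/usr/bin/env bash
    # Minimal sketch of the perf-liveness polling traced above (delete_subsystem.sh@52-58).
    # PERF path and arguments come from the xtrace; the error branch is assumed.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    "$PERF" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    # kill -0 sends no signal; it only tests whether the PID still exists.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not finish in time" >&2; exit 1; }
        sleep 0.5
    done

Once perf exits, bash reaps the child and kill -0 starts failing; without the stderr redirect used in the sketch that failure is the "kill: (PID) - No such process" message the trace prints before the test moves on.
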
00:26:21.884 14:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:21.884 14:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:21.884 14:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:22.451 14:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:22.451 14:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:22.451 14:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:23.019 14:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:23.019 14:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:23.019 14:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:23.278 14:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:23.278 14:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:23.278 14:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:23.846 14:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:23.846 14:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:23.846 14:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:24.414 14:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:24.414 14:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:24.414 14:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:24.673 Initializing NVMe Controllers 00:26:24.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.673 Controller IO queue size 128, less than required. 00:26:24.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:24.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:24.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:24.673 Initialization complete. Launching workers. 
00:26:24.673 ======================================================== 00:26:24.673 Latency(us) 00:26:24.673 Device Information : IOPS MiB/s Average min max 00:26:24.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002220.09 1000241.31 1007306.83 00:26:24.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003766.71 1000339.43 1008862.41 00:26:24.673 ======================================================== 00:26:24.673 Total : 256.00 0.12 1002993.40 1000241.31 1008862.41 00:26:24.673 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071903 00:26:24.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1071903) - No such process 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1071903 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.932 rmmod nvme_tcp 00:26:24.932 rmmod nvme_fabrics 00:26:24.932 rmmod nvme_keyring 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1070889 ']' 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1070889 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1070889 ']' 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1070889 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1070889 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1070889' 00:26:24.932 killing process with pid 1070889 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1070889 00:26:24.932 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1070889 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.191 14:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.101 00:26:27.101 real 0m15.739s 00:26:27.101 user 0m25.362s 00:26:27.101 sys 0m5.722s 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:27.101 ************************************ 00:26:27.101 END TEST nvmf_delete_subsystem 00:26:27.101 ************************************ 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:27.101 ************************************ 00:26:27.101 START TEST nvmf_host_management 00:26:27.101 ************************************ 00:26:27.101 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:27.101 * Looking for test storage... 00:26:27.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:26:27.360 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.361 --rc genhtml_branch_coverage=1 00:26:27.361 --rc genhtml_function_coverage=1 00:26:27.361 --rc genhtml_legend=1 00:26:27.361 --rc geninfo_all_blocks=1 00:26:27.361 --rc geninfo_unexecuted_blocks=1 00:26:27.361 00:26:27.361 ' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.361 --rc genhtml_branch_coverage=1 00:26:27.361 --rc genhtml_function_coverage=1 00:26:27.361 --rc genhtml_legend=1 00:26:27.361 --rc geninfo_all_blocks=1 00:26:27.361 --rc geninfo_unexecuted_blocks=1 00:26:27.361 00:26:27.361 ' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.361 --rc genhtml_branch_coverage=1 00:26:27.361 --rc genhtml_function_coverage=1 00:26:27.361 --rc genhtml_legend=1 00:26:27.361 --rc geninfo_all_blocks=1 00:26:27.361 --rc geninfo_unexecuted_blocks=1 00:26:27.361 00:26:27.361 ' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:27.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.361 --rc genhtml_branch_coverage=1 00:26:27.361 --rc genhtml_function_coverage=1 00:26:27.361 --rc genhtml_legend=1 
00:26:27.361 --rc geninfo_all_blocks=1 00:26:27.361 --rc geninfo_unexecuted_blocks=1 00:26:27.361 00:26:27.361 ' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.361 14:10:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.361 14:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.628 14:10:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:32.628 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:32.628 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:32.628 Found net devices under 0000:31:00.0: cvl_0_0 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.628 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:32.629 Found net devices under 0000:31:00.1: cvl_0_1 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:26:32.629 00:26:32.629 --- 10.0.0.2 ping statistics --- 00:26:32.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.629 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:26:32.629 00:26:32.629 --- 10.0.0.1 ping statistics --- 00:26:32.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.629 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1077091 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1077091 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1077091 ']' 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:32.629 14:10:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:26:32.629 [2024-11-06 14:10:11.844769] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:32.629 [2024-11-06 14:10:11.845912] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:26:32.629 [2024-11-06 14:10:11.845963] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.889 [2024-11-06 14:10:11.928381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.889 [2024-11-06 14:10:11.967218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.889 [2024-11-06 14:10:11.967266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.889 [2024-11-06 14:10:11.967273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.889 [2024-11-06 14:10:11.967278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.889 [2024-11-06 14:10:11.967282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.889 [2024-11-06 14:10:11.968709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.889 [2024-11-06 14:10:11.968866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.889 [2024-11-06 14:10:11.968882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:32.889 [2024-11-06 14:10:11.968888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.889 [2024-11-06 14:10:12.023920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:32.889 [2024-11-06 14:10:12.024765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:32.889 [2024-11-06 14:10:12.024994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:32.889 [2024-11-06 14:10:12.025131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:32.889 [2024-11-06 14:10:12.025140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
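
For readers following the nvmf/common.sh trace above (@250-291): the target-side network-namespace plumbing it performs reduces to the sketch below. Namespace, interface names, and addresses are taken from the trace; the two e810 ports (cvl_0_0/cvl_0_1, cabled back to back) are assumed to already exist:

    #!/usr/bin/env bash
    # Sketch of the netns setup traced in nvmf/common.sh@265-291 above.
    # cvl_0_0/cvl_0_1 are the e810 net devices discovered earlier in the trace.
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The target application is then started inside the namespace (ip netns exec $NS nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E, as traced above), which is why the log shows reactors coming up in interrupt mode on cores 1-4.
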
00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.455 [2024-11-06 14:10:12.661815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.455 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.455 Malloc0 00:26:33.455 [2024-11-06 14:10:12.729582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1077283 00:26:33.713 14:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1077283 /var/tmp/bdevperf.sock 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1077283 ']' 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:33.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:33.713 { 00:26:33.713 "params": { 00:26:33.713 "name": "Nvme$subsystem", 00:26:33.713 "trtype": "$TEST_TRANSPORT", 00:26:33.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.713 "adrfam": "ipv4", 00:26:33.713 "trsvcid": "$NVMF_PORT", 00:26:33.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.713 "hdgst": ${hdgst:-false}, 00:26:33.713 "ddgst": ${ddgst:-false} 00:26:33.713 }, 00:26:33.713 "method": "bdev_nvme_attach_controller" 00:26:33.713 } 00:26:33.713 EOF 00:26:33.713 )") 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:33.713 14:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:33.713 "params": { 00:26:33.713 "name": "Nvme0", 00:26:33.713 "trtype": "tcp", 00:26:33.713 "traddr": "10.0.0.2", 00:26:33.713 "adrfam": "ipv4", 00:26:33.713 "trsvcid": "4420", 00:26:33.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:33.713 "hdgst": false, 00:26:33.713 "ddgst": false 00:26:33.713 }, 00:26:33.713 "method": "bdev_nvme_attach_controller" 00:26:33.713 }' 00:26:33.713 [2024-11-06 14:10:12.799554] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:26:33.713 [2024-11-06 14:10:12.799604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077283 ] 00:26:33.713 [2024-11-06 14:10:12.877687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.713 [2024-11-06 14:10:12.913817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.971 Running I/O for 10 seconds... 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.539 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.539 [2024-11-06 14:10:13.629895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.539 [2024-11-06 14:10:13.629932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.539 [2024-11-06 14:10:13.629942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.539 [2024-11-06 14:10:13.629953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.629961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.629973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.629980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.629986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.629993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.629999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 [2024-11-06 14:10:13.630006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17800 is same with the state(6) to be set 00:26:34.540 
[... tcp.c:1773 "The recv state of tqpair=0xb17800 is same with the state(6) to be set" notice repeated for each remaining poll, timestamps 14:10:13.630012 through 14:10:13.630256, elided ...]
00:26:34.540 [2024-11-06 14:10:13.630364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.540 [2024-11-06 14:10:13.630401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE print_command / ABORTED - SQ DELETION print_completion pairs for the remaining in-flight I/Os, cid:1 through cid:62 (lba:82048 through lba:89856), elided ...]
00:26:34.542 [2024-11-06 14:10:13.631482] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.542 [2024-11-06 14:10:13.631490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.542 [2024-11-06 14:10:13.631517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.542 [2024-11-06 14:10:13.632732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.542 task offset: 81920 on job bdev=Nvme0n1 fails 00:26:34.542 00:26:34.542 Latency(us) 00:26:34.542 [2024-11-06T13:10:13.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.542 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.542 Job: Nvme0n1 ended in about 0.41 seconds with error 00:26:34.542 Verification LBA range: start 0x0 length 0x400 00:26:34.542 Nvme0n1 : 0.41 1552.56 97.04 155.26 0.00 36313.81 2949.12 31675.73 00:26:34.542 [2024-11-06T13:10:13.826Z] =================================================================================================================== 00:26:34.542 [2024-11-06T13:10:13.826Z] Total : 1552.56 97.04 155.26 0.00 36313.81 2949.12 31675.73 00:26:34.542 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.542 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:34.542 [2024-11-06 14:10:13.634748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:34.542 [2024-11-06 14:10:13.634771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cfb00 (9): Bad file descriptor 00:26:34.542 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.542 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.542 [2024-11-06 14:10:13.636051] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:26:34.542 [2024-11-06 14:10:13.636131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:34.542 [2024-11-06 14:10:13.636160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.542 [2024-11-06 14:10:13.636178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:26:34.542 [2024-11-06 14:10:13.636187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:26:34.542 [2024-11-06 14:10:13.636195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.542 [2024-11-06 14:10:13.636202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10cfb00 00:26:34.542 [2024-11-06 14:10:13.636223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x10cfb00 (9): Bad file descriptor 00:26:34.542 [2024-11-06 14:10:13.636236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.542 [2024-11-06 14:10:13.636252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.542 [2024-11-06 14:10:13.636266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.542 [2024-11-06 14:10:13.636279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.542 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.542 14:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1077283 00:26:35.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1077283) - No such process 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.488 { 00:26:35.488 "params": { 00:26:35.488 "name": "Nvme$subsystem", 00:26:35.488 "trtype": "$TEST_TRANSPORT", 00:26:35.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.488 "adrfam": "ipv4", 00:26:35.488 "trsvcid": "$NVMF_PORT", 00:26:35.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.488 "hdgst": ${hdgst:-false}, 00:26:35.488 "ddgst": ${ddgst:-false} 00:26:35.488 }, 00:26:35.488 "method": "bdev_nvme_attach_controller" 00:26:35.488 } 00:26:35.488 EOF 00:26:35.488 )") 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
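(Reader's note: the trace above has just launched the second bdevperf run. Stripped of xtrace noise, the scenario that produced the errors and the failed first run is roughly the sketch below -- a condensed reading aid using the helper names from the trace (rpc_cmd, waitforio, gen_nvmf_target_json), not the literal host_management.sh; the backgrounding and pid capture in step 1 are assumptions.)

# 1) start a 10s verify job against the exported namespace
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                      # assumption: pid taken from the background job

# 2) wait until the job has issued some I/O (read_io_count=579 >= 100 above)
waitforio /var/tmp/bdevperf.sock Nvme0n1

# 3) revoke host access mid-I/O: outstanding WRITEs complete as
#    "ABORTED - SQ DELETION" and the reconnect is refused (seen above)
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# 4) restore access and reap the (already dead) first run
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
kill -9 "$perfpid" || true      # trace shows "No such process", which is tolerated
rm -f /var/tmp/spdk_cpu_lock_00{1..4}

# 5) a short second run must now complete cleanly
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

(Step 3 is why the target logged nvmf_qpair_access_allowed "does not allow host" and the host's FABRIC CONNECT failed; the clean second-run results below confirm access was restored.)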
00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:35.488 14:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:35.488 "params": { 00:26:35.488 "name": "Nvme0", 00:26:35.488 "trtype": "tcp", 00:26:35.488 "traddr": "10.0.0.2", 00:26:35.488 "adrfam": "ipv4", 00:26:35.488 "trsvcid": "4420", 00:26:35.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:35.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:35.488 "hdgst": false, 00:26:35.488 "ddgst": false 00:26:35.488 }, 00:26:35.488 "method": "bdev_nvme_attach_controller" 00:26:35.488 }' 00:26:35.488 [2024-11-06 14:10:14.679807] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:26:35.488 [2024-11-06 14:10:14.679865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077658 ] 00:26:35.488 [2024-11-06 14:10:14.757466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.747 [2024-11-06 14:10:14.792463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.747 Running I/O for 1 seconds... 00:26:37.125 1809.00 IOPS, 113.06 MiB/s 00:26:37.125 Latency(us) 00:26:37.125 [2024-11-06T13:10:16.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.125 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:37.125 Verification LBA range: start 0x0 length 0x400 00:26:37.125 Nvme0n1 : 1.01 1853.50 115.84 0.00 0.00 33804.37 3358.72 34078.72 00:26:37.125 [2024-11-06T13:10:16.409Z] =================================================================================================================== 00:26:37.125 [2024-11-06T13:10:16.409Z] Total : 1853.50 115.84 0.00 0.00 33804.37 3358.72 34078.72 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.125 rmmod nvme_tcp 00:26:37.125 rmmod nvme_fabrics 00:26:37.125 rmmod nvme_keyring 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1077091 ']' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1077091 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1077091 ']' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1077091 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1077091 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1077091' 00:26:37.125 killing process with pid 1077091 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1077091 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1077091 00:26:37.125 [2024-11-06 14:10:16.312242] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.125 14:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:39.663 00:26:39.663 real 0m12.054s 00:26:39.663 user 0m17.527s 00:26:39.663 sys 0m5.468s 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:39.663 ************************************ 00:26:39.663 END TEST nvmf_host_management 00:26:39.663 ************************************ 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:39.663 ************************************ 00:26:39.663 START TEST nvmf_lvol 00:26:39.663 ************************************ 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:39.663 * Looking for test storage... 
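(Reader's note: the START TEST / END TEST banners and the real/user/sys block above come from the autotest run_test wrapper. Schematically it follows the pattern below -- a sketch of the observed behavior, not the exact autotest_common.sh source.)

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"     # emits the real/user/sys summary seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

# as invoked above:
# run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode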
00:26:39.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:39.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:39.663 --rc genhtml_branch_coverage=1
00:26:39.663 --rc genhtml_function_coverage=1
00:26:39.663 --rc genhtml_legend=1
00:26:39.663 --rc geninfo_all_blocks=1
00:26:39.663 --rc geninfo_unexecuted_blocks=1
00:26:39.663
00:26:39.663 '
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' [same coverage-option block as above, elided] '
00:26:39.663 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov [same coverage-option block as above, elided] '
00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov [same coverage-option block as above, elided] '
00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.664 14:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:39.664 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:39.665 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:26:39.665 14:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.935 14:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:44.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:44.935 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:44.935 Found net devices under 0000:31:00.0: cvl_0_0 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:44.935 Found net devices under 0000:31:00.1: cvl_0_1 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.935 
14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.935 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:26:44.936 00:26:44.936 --- 10.0.0.2 ping statistics --- 00:26:44.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.936 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:26:44.936 00:26:44.936 --- 10.0.0.1 ping statistics --- 00:26:44.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.936 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1082323 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1082323 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1082323 ']' 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:26:44.936 14:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:44.936 [2024-11-06 14:10:23.977506] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:26:44.936 [2024-11-06 14:10:23.978516] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:26:44.936 [2024-11-06 14:10:23.978551] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.936 [2024-11-06 14:10:24.063964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:44.936 [2024-11-06 14:10:24.100504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.936 [2024-11-06 14:10:24.100537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.936 [2024-11-06 14:10:24.100547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.936 [2024-11-06 14:10:24.100553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.936 [2024-11-06 14:10:24.100560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.936 [2024-11-06 14:10:24.101825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.936 [2024-11-06 14:10:24.101950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.936 [2024-11-06 14:10:24.101971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.936 [2024-11-06 14:10:24.158237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:44.936 [2024-11-06 14:10:24.158530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:44.936 [2024-11-06 14:10:24.158621] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:44.936 [2024-11-06 14:10:24.159087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.504 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:45.763 [2024-11-06 14:10:24.922621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.763 14:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:46.023 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:26:46.023 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:46.283 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:26:46.284 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:26:46.284 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:26:46.543 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=23bfef7b-b9e2-4541-99b3-6c3dc255102a 00:26:46.543 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 23bfef7b-b9e2-4541-99b3-6c3dc255102a lvol 20 00:26:46.802 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=843696b6-c08f-4321-834a-1ecc948b7492 00:26:46.802 14:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:46.802 14:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 843696b6-c08f-4321-834a-1ecc948b7492 00:26:47.061 14:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:47.061 [2024-11-06 14:10:26.322682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 ***
00:26:47.061 14:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:47.321 14:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1082999
00:26:47.321 14:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:26:47.321 14:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:26:48.261 14:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 843696b6-c08f-4321-834a-1ecc948b7492 MY_SNAPSHOT
00:26:48.520 14:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=769b2160-fed6-4951-a81f-87015927c3fd
00:26:48.520 14:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 843696b6-c08f-4321-834a-1ecc948b7492 30
00:26:48.779 14:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 769b2160-fed6-4951-a81f-87015927c3fd MY_CLONE
00:26:49.039 14:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bc8b844f-5a41-4cfa-aae6-1c2d58945e85
00:26:49.039 14:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bc8b844f-5a41-4cfa-aae6-1c2d58945e85
00:26:49.298 14:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1082999
00:26:57.541 Initializing NVMe Controllers
00:26:57.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:26:57.541 Controller IO queue size 128, less than required.
00:26:57.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:57.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:26:57.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:26:57.541 Initialization complete. Launching workers.
00:26:57.541 ========================================================
00:26:57.541 Latency(us)
00:26:57.541 Device Information : IOPS MiB/s Average min max
00:26:57.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16565.66 64.71 7729.78 1354.19 52424.37
00:26:57.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16267.86 63.55 7870.26 1380.66 52072.03
00:26:57.541 ========================================================
00:26:57.541 Total : 32833.52 128.26 7799.39 1354.19 52424.37
00:26:57.541
00:26:57.541 14:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:57.801 14:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 843696b6-c08f-4321-834a-1ecc948b7492
00:26:57.801 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23bfef7b-b9e2-4541-99b3-6c3dc255102a
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:58.060 rmmod nvme_tcp
00:26:58.060 rmmod nvme_fabrics
00:26:58.060 rmmod nvme_keyring
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1082323 ']'
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1082323
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1082323 ']'
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1082323
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1082323
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1082323'
00:26:58.060 killing process with pid 1082323
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1082323
00:26:58.060 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1082323
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:58.319 14:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:00.224
00:27:00.224 real 0m21.011s
00:27:00.224 user 0m53.939s
00:27:00.224 sys 0m8.716s
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:27:00.224 ************************************
00:27:00.224 END TEST nvmf_lvol
00:27:00.224 ************************************
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:00.224 ************************************
00:27:00.224 START TEST nvmf_lvs_grow
************************************ 00:27:00.224 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:00.485 * Looking for test storage... 00:27:00.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.485 --rc genhtml_branch_coverage=1 00:27:00.485 --rc genhtml_function_coverage=1 00:27:00.485 --rc genhtml_legend=1 00:27:00.485 --rc geninfo_all_blocks=1 00:27:00.485 --rc geninfo_unexecuted_blocks=1 00:27:00.485 00:27:00.485 ' 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.485 --rc genhtml_branch_coverage=1 00:27:00.485 --rc genhtml_function_coverage=1 00:27:00.485 --rc genhtml_legend=1 00:27:00.485 --rc geninfo_all_blocks=1 00:27:00.485 --rc geninfo_unexecuted_blocks=1 00:27:00.485 00:27:00.485 ' 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.485 --rc genhtml_branch_coverage=1 00:27:00.485 --rc genhtml_function_coverage=1 00:27:00.485 --rc genhtml_legend=1 00:27:00.485 --rc geninfo_all_blocks=1 00:27:00.485 --rc geninfo_unexecuted_blocks=1 00:27:00.485 00:27:00.485 ' 00:27:00.485 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:00.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.486 --rc genhtml_branch_coverage=1 00:27:00.486 --rc genhtml_function_coverage=1 00:27:00.486 --rc genhtml_legend=1 00:27:00.486 --rc geninfo_all_blocks=1 00:27:00.486 --rc geninfo_unexecuted_blocks=1 00:27:00.486 00:27:00.486 ' 00:27:00.486 14:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.486 14:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.763 14:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.763 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:05.764 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:05.764 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:05.764 Found net devices under 0000:31:00.0: cvl_0_0 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:05.764 Found net devices under 0000:31:00.1: cvl_0_1 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.764 14:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:27:05.764 00:27:05.764 --- 10.0.0.2 ping statistics --- 00:27:05.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.764 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:27:05.764 00:27:05.764 --- 10.0.0.1 ping statistics --- 00:27:05.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.764 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1089664 00:27:05.764 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1089664 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1089664 ']' 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:05.765 14:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:05.765 [2024-11-06 14:10:44.859707] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
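[Editor's note] Stripped of the xtrace prefixes, nvmf_tcp_init wires the two E810 ports into a self-contained target/initiator pair, and nvmfappstart then launches the target inside the namespace. Condensed from the trace above, with paths shortened for readability:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

-m 0x1 runs a single reactor on core 0, and --interrupt-mode is what produces the "Set spdk_thread ... to intr mode" notices around this point: the reactor sleeps on a file descriptor instead of busy-polling.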
00:27:05.765 [2024-11-06 14:10:44.860693] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:05.765 [2024-11-06 14:10:44.860730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.765 [2024-11-06 14:10:44.930885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.765 [2024-11-06 14:10:44.959737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.765 [2024-11-06 14:10:44.959761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.765 [2024-11-06 14:10:44.959768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.765 [2024-11-06 14:10:44.959772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.765 [2024-11-06 14:10:44.959777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.765 [2024-11-06 14:10:44.960218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.765 [2024-11-06 14:10:45.011628] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:05.765 [2024-11-06 14:10:45.011813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:05.765 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:05.765 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:27:05.765 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.765 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.765 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:06.026 [2024-11-06 14:10:45.200914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:06.026 ************************************ 00:27:06.026 START TEST lvs_grow_clean 00:27:06.026 ************************************ 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:06.026 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:06.285 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:06.285 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:06.545 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:06.545 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:06.545 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:06.545 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:06.545 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:06.545 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9f06cebd-98d6-40d9-9325-141f1ae05dea lvol 150 00:27:06.805 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=55c49117-5d76-4964-bbd6-b0083d56dbdd 00:27:06.805 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:06.805 14:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:06.805 [2024-11-06 14:10:46.044618] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:06.805 [2024-11-06 14:10:46.044760] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:06.805 true 00:27:06.805 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:06.805 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:07.065 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:07.065 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:07.325 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55c49117-5d76-4964-bbd6-b0083d56dbdd 00:27:07.325 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:07.586 [2024-11-06 14:10:46.661138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1090044 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1090044 /var/tmp/bdevperf.sock 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1090044 ']' 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:07.586 14:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:07.586 [2024-11-06 14:10:46.865223] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:07.586 [2024-11-06 14:10:46.865284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090044 ] 00:27:07.846 [2024-11-06 14:10:46.943043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.846 [2024-11-06 14:10:46.981907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.416 14:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:08.416 14:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:27:08.416 14:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:08.985 Nvme0n1 00:27:08.985 14:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:08.985 [ 00:27:08.985 { 00:27:08.985 "name": "Nvme0n1", 00:27:08.985 "aliases": [ 00:27:08.985 "55c49117-5d76-4964-bbd6-b0083d56dbdd" 00:27:08.985 ], 00:27:08.985 "product_name": "NVMe disk", 00:27:08.985 "block_size": 4096, 00:27:08.985 "num_blocks": 38912, 00:27:08.985 "uuid": "55c49117-5d76-4964-bbd6-b0083d56dbdd", 00:27:08.985 "numa_id": 0, 00:27:08.985 "assigned_rate_limits": { 00:27:08.985 "rw_ios_per_sec": 0, 00:27:08.985 "rw_mbytes_per_sec": 0, 00:27:08.985 "r_mbytes_per_sec": 0, 00:27:08.985 "w_mbytes_per_sec": 0 00:27:08.985 }, 00:27:08.985 "claimed": false, 00:27:08.985 "zoned": false, 00:27:08.985 "supported_io_types": { 00:27:08.985 "read": true, 00:27:08.985 "write": true, 00:27:08.985 "unmap": true, 00:27:08.985 "flush": true, 00:27:08.985 "reset": true, 00:27:08.985 "nvme_admin": true, 00:27:08.985 "nvme_io": true, 00:27:08.985 "nvme_io_md": false, 00:27:08.985 "write_zeroes": true, 00:27:08.985 "zcopy": false, 00:27:08.985 "get_zone_info": false, 00:27:08.985 "zone_management": false, 00:27:08.985 "zone_append": false, 00:27:08.985 "compare": true, 00:27:08.985 "compare_and_write": true, 00:27:08.985 "abort": true, 00:27:08.985 "seek_hole": false, 00:27:08.985 "seek_data": false, 00:27:08.985 "copy": true, 
00:27:08.985 "nvme_iov_md": false 00:27:08.985 }, 00:27:08.985 "memory_domains": [ 00:27:08.985 { 00:27:08.985 "dma_device_id": "system", 00:27:08.985 "dma_device_type": 1 00:27:08.985 } 00:27:08.985 ], 00:27:08.985 "driver_specific": { 00:27:08.985 "nvme": [ 00:27:08.985 { 00:27:08.985 "trid": { 00:27:08.985 "trtype": "TCP", 00:27:08.985 "adrfam": "IPv4", 00:27:08.985 "traddr": "10.0.0.2", 00:27:08.985 "trsvcid": "4420", 00:27:08.985 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:08.985 }, 00:27:08.985 "ctrlr_data": { 00:27:08.985 "cntlid": 1, 00:27:08.985 "vendor_id": "0x8086", 00:27:08.985 "model_number": "SPDK bdev Controller", 00:27:08.985 "serial_number": "SPDK0", 00:27:08.985 "firmware_revision": "25.01", 00:27:08.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.985 "oacs": { 00:27:08.985 "security": 0, 00:27:08.985 "format": 0, 00:27:08.985 "firmware": 0, 00:27:08.985 "ns_manage": 0 00:27:08.985 }, 00:27:08.985 "multi_ctrlr": true, 00:27:08.985 "ana_reporting": false 00:27:08.985 }, 00:27:08.985 "vs": { 00:27:08.985 "nvme_version": "1.3" 00:27:08.985 }, 00:27:08.985 "ns_data": { 00:27:08.985 "id": 1, 00:27:08.985 "can_share": true 00:27:08.985 } 00:27:08.985 } 00:27:08.985 ], 00:27:08.985 "mp_policy": "active_passive" 00:27:08.985 } 00:27:08.985 } 00:27:08.985 ] 00:27:08.985 14:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1090379 00:27:08.985 14:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:08.985 14:10:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:08.985 Running I/O for 10 seconds... 
00:27:10.358 Latency(us) 00:27:10.358 [2024-11-06T13:10:49.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.359 Nvme0n1 : 1.00 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:27:10.359 [2024-11-06T13:10:49.643Z] =================================================================================================================== 00:27:10.359 [2024-11-06T13:10:49.643Z] Total : 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:27:10.359 00:27:10.927 14:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:11.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:11.187 Nvme0n1 : 2.00 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:27:11.187 [2024-11-06T13:10:50.471Z] =================================================================================================================== 00:27:11.187 [2024-11-06T13:10:50.471Z] Total : 17848.50 69.72 0.00 0.00 0.00 0.00 0.00 00:27:11.187 00:27:11.187 true 00:27:11.187 14:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:11.187 14:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:11.446 14:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:11.446 14:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:11.446 14:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1090379 00:27:12.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:12.013 Nvme0n1 : 3.00 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:27:12.013 [2024-11-06T13:10:51.297Z] =================================================================================================================== 00:27:12.013 [2024-11-06T13:10:51.297Z] Total : 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:27:12.013 00:27:13.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:13.391 Nvme0n1 : 4.00 18735.75 73.19 0.00 0.00 0.00 0.00 0.00 00:27:13.391 [2024-11-06T13:10:52.675Z] =================================================================================================================== 00:27:13.391 [2024-11-06T13:10:52.675Z] Total : 18735.75 73.19 0.00 0.00 0.00 0.00 0.00 00:27:13.391 00:27:14.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:14.323 Nvme0n1 : 5.00 20069.20 78.40 0.00 0.00 0.00 0.00 0.00 00:27:14.323 [2024-11-06T13:10:53.607Z] =================================================================================================================== 00:27:14.323 [2024-11-06T13:10:53.607Z] Total : 20069.20 78.40 0.00 0.00 0.00 0.00 0.00 00:27:14.323 00:27:15.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:15.259 Nvme0n1 : 6.00 20958.17 81.87 0.00 0.00 0.00 0.00 0.00 00:27:15.259 [2024-11-06T13:10:54.543Z] 
=================================================================================================================== 00:27:15.259 [2024-11-06T13:10:54.543Z] Total : 20958.17 81.87 0.00 0.00 0.00 0.00 0.00 00:27:15.259 00:27:16.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:16.195 Nvme0n1 : 7.00 21588.71 84.33 0.00 0.00 0.00 0.00 0.00 00:27:16.195 [2024-11-06T13:10:55.479Z] =================================================================================================================== 00:27:16.195 [2024-11-06T13:10:55.479Z] Total : 21588.71 84.33 0.00 0.00 0.00 0.00 0.00 00:27:16.195 00:27:17.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:17.130 Nvme0n1 : 8.00 22065.12 86.19 0.00 0.00 0.00 0.00 0.00 00:27:17.130 [2024-11-06T13:10:56.414Z] =================================================================================================================== 00:27:17.130 [2024-11-06T13:10:56.414Z] Total : 22065.12 86.19 0.00 0.00 0.00 0.00 0.00 00:27:17.130 00:27:18.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.065 Nvme0n1 : 9.00 22442.89 87.67 0.00 0.00 0.00 0.00 0.00 00:27:18.065 [2024-11-06T13:10:57.349Z] =================================================================================================================== 00:27:18.065 [2024-11-06T13:10:57.349Z] Total : 22442.89 87.67 0.00 0.00 0.00 0.00 0.00 00:27:18.065 00:27:19.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.442 Nvme0n1 : 10.00 22738.90 88.82 0.00 0.00 0.00 0.00 0.00 00:27:19.442 [2024-11-06T13:10:58.726Z] =================================================================================================================== 00:27:19.442 [2024-11-06T13:10:58.726Z] Total : 22738.90 88.82 0.00 0.00 0.00 0.00 0.00 00:27:19.442 00:27:19.442 00:27:19.442 Latency(us) 00:27:19.442 [2024-11-06T13:10:58.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.442 Nvme0n1 : 10.00 22744.91 88.85 0.00 0.00 5625.00 2184.53 13052.59 00:27:19.442 [2024-11-06T13:10:58.726Z] =================================================================================================================== 00:27:19.442 [2024-11-06T13:10:58.726Z] Total : 22744.91 88.85 0.00 0.00 5625.00 2184.53 13052.59 00:27:19.442 { 00:27:19.442 "results": [ 00:27:19.442 { 00:27:19.442 "job": "Nvme0n1", 00:27:19.442 "core_mask": "0x2", 00:27:19.442 "workload": "randwrite", 00:27:19.442 "status": "finished", 00:27:19.442 "queue_depth": 128, 00:27:19.442 "io_size": 4096, 00:27:19.442 "runtime": 10.002987, 00:27:19.442 "iops": 22744.906096548963, 00:27:19.442 "mibps": 88.84728943964438, 00:27:19.442 "io_failed": 0, 00:27:19.442 "io_timeout": 0, 00:27:19.442 "avg_latency_us": 5625.000714085834, 00:27:19.442 "min_latency_us": 2184.5333333333333, 00:27:19.442 "max_latency_us": 13052.586666666666 00:27:19.442 } 00:27:19.442 ], 00:27:19.442 "core_count": 1 00:27:19.442 } 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1090044 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1090044 ']' 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1090044 
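[Editor's note] Two sanity checks on the numbers above. The bandwidth column is just IOPS times the 4 KiB I/O size, and the mid-run bdev_lvol_grow_lvstore doubles the data-cluster count exactly as the 400 MiB backing file predicts:

22744.91 IOPS * 4096 B ≈ 93.16 MB/s ≈ 88.85 MiB/s   (matches the Total row above)
400 MiB / 4 MiB = 100 clusters; 100 - 1 metadata = 99   (the (( data_clusters == 99 )) check above)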
00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1090044 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1090044' 00:27:19.442 killing process with pid 1090044 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1090044 00:27:19.442 Received shutdown signal, test time was about 10.000000 seconds 00:27:19.442 00:27:19.442 Latency(us) 00:27:19.442 [2024-11-06T13:10:58.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.442 [2024-11-06T13:10:58.726Z] =================================================================================================================== 00:27:19.442 [2024-11-06T13:10:58.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1090044 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:19.442 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:19.700 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:19.700 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:19.701 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:19.701 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:19.701 14:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:19.959 [2024-11-06 14:10:59.088698] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:19.959 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 
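[Editor's note] The NOT wrapper here is a deliberate negative test: after bdev_aio_delete tears down the lvstore's backing device, bdev_lvol_get_lvstores has to fail, and the code -19 / "No such device" JSON-RPC error on the next lines is the expected outcome. A minimal sketch of the idea behind NOT (the real helper in autotest_common.sh is more involved):

NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    fi
    return 0            # the expected failure
}
NOT ./scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea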
00:27:19.959 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:27:19.959 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:19.959 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:19.959 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:19.960 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:20.219 request: 00:27:20.219 { 00:27:20.219 "uuid": "9f06cebd-98d6-40d9-9325-141f1ae05dea", 00:27:20.219 "method": "bdev_lvol_get_lvstores", 00:27:20.219 "req_id": 1 00:27:20.219 } 00:27:20.219 Got JSON-RPC error response 00:27:20.219 response: 00:27:20.219 { 00:27:20.219 "code": -19, 00:27:20.219 "message": "No such device" 00:27:20.219 } 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:20.219 aio_bdev 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
55c49117-5d76-4964-bbd6-b0083d56dbdd 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=55c49117-5d76-4964-bbd6-b0083d56dbdd 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:20.219 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:20.479 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55c49117-5d76-4964-bbd6-b0083d56dbdd -t 2000 00:27:20.740 [ 00:27:20.740 { 00:27:20.740 "name": "55c49117-5d76-4964-bbd6-b0083d56dbdd", 00:27:20.740 "aliases": [ 00:27:20.740 "lvs/lvol" 00:27:20.740 ], 00:27:20.740 "product_name": "Logical Volume", 00:27:20.740 "block_size": 4096, 00:27:20.740 "num_blocks": 38912, 00:27:20.740 "uuid": "55c49117-5d76-4964-bbd6-b0083d56dbdd", 00:27:20.740 "assigned_rate_limits": { 00:27:20.740 "rw_ios_per_sec": 0, 00:27:20.740 "rw_mbytes_per_sec": 0, 00:27:20.740 "r_mbytes_per_sec": 0, 00:27:20.740 "w_mbytes_per_sec": 0 00:27:20.740 }, 00:27:20.740 "claimed": false, 00:27:20.740 "zoned": false, 00:27:20.740 "supported_io_types": { 00:27:20.740 "read": true, 00:27:20.740 "write": true, 00:27:20.740 "unmap": true, 00:27:20.740 "flush": false, 00:27:20.740 "reset": true, 00:27:20.740 "nvme_admin": false, 00:27:20.740 "nvme_io": false, 00:27:20.740 "nvme_io_md": false, 00:27:20.740 "write_zeroes": true, 00:27:20.740 "zcopy": false, 00:27:20.740 "get_zone_info": false, 00:27:20.740 "zone_management": false, 00:27:20.740 "zone_append": false, 00:27:20.740 "compare": false, 00:27:20.740 "compare_and_write": false, 00:27:20.740 "abort": false, 00:27:20.740 "seek_hole": true, 00:27:20.740 "seek_data": true, 00:27:20.740 "copy": false, 00:27:20.740 "nvme_iov_md": false 00:27:20.740 }, 00:27:20.740 "driver_specific": { 00:27:20.740 "lvol": { 00:27:20.740 "lvol_store_uuid": "9f06cebd-98d6-40d9-9325-141f1ae05dea", 00:27:20.740 "base_bdev": "aio_bdev", 00:27:20.740 "thin_provision": false, 00:27:20.740 "num_allocated_clusters": 38, 00:27:20.740 "snapshot": false, 00:27:20.740 "clone": false, 00:27:20.740 "esnap_clone": false 00:27:20.740 } 00:27:20.740 } 00:27:20.740 } 00:27:20.740 ] 00:27:20.740 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:27:20.740 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:20.740 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:20.740 14:10:59 
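[Editor's note] The lvol dump above closes the cluster accounting: recreating aio_bdev lets the lvstore load from the metadata it persisted in the file, so the lvol reappears with num_allocated_clusters 38. The checks immediately below follow from that and from the 99 data clusters measured after the grow:

99 total_data_clusters - 38 num_allocated_clusters = 61 free_clusters   # (( free_clusters == 61 ))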
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:20.740 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:20.740 14:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:21.000 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:21.000 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 55c49117-5d76-4964-bbd6-b0083d56dbdd 00:27:21.259 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9f06cebd-98d6-40d9-9325-141f1ae05dea 00:27:21.259 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:21.518 00:27:21.518 real 0m15.430s 00:27:21.518 user 0m15.074s 00:27:21.518 sys 0m1.238s 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.518 ************************************ 00:27:21.518 END TEST lvs_grow_clean 00:27:21.518 ************************************ 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:21.518 ************************************ 00:27:21.518 START TEST lvs_grow_dirty 00:27:21.518 ************************************ 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:21.518 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:21.519 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:21.519 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:21.778 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:21.778 14:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:22.037 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=eab55934-84cf-4eb0-beaf-af6835950d73 00:27:22.038 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:22.038 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:22.038 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:22.038 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:22.038 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eab55934-84cf-4eb0-beaf-af6835950d73 lvol 150 00:27:22.296 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1503d22d-c1a6-4825-88c7-846ff902829e 00:27:22.296 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:22.296 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:22.296 [2024-11-06 14:11:01.560622] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:22.296 [2024-11-06 14:11:01.560768] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:22.296 true 00:27:22.296 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:22.555 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:22.555 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:22.555 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:22.815 14:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1503d22d-c1a6-4825-88c7-846ff902829e 00:27:22.815 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:23.074 [2024-11-06 14:11:02.209118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.074 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1093431 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1093431 /var/tmp/bdevperf.sock 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1093431 ']' 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:23.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
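(For reference, the lvs_grow_dirty setup traced above reduces to the RPC sequence below — a minimal sketch, where $aio_file stands in for the workspace-local backing file and $lvs/$lvol for the UUIDs printed in the log, and rpc.py is assumed to reach the running nvmf_tgt.)

    rm -f "$aio_file" && truncate -s 200M "$aio_file"      # 200 MiB backing file
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096       # AIO bdev, 4 KiB blocks (51200 blocks)
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 4 MiB clusters -> 49 usable data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB volume, thick-provisioned (38 clusters)
    truncate -s 400M "$aio_file"                           # grow the file underneath the bdev
    rpc.py bdev_aio_rescan aio_bdev                        # bdev grows 51200 -> 102400 blocks; the
                                                           # lvstore itself still reports 49 clusters
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420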
00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:23.334 [2024-11-06 14:11:02.412086] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:23.334 [2024-11-06 14:11:02.412139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093431 ] 00:27:23.334 [2024-11-06 14:11:02.477175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.334 [2024-11-06 14:11:02.506883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:23.334 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:23.901 Nvme0n1 00:27:23.901 14:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:23.901 [ 00:27:23.901 { 00:27:23.901 "name": "Nvme0n1", 00:27:23.901 "aliases": [ 00:27:23.901 "1503d22d-c1a6-4825-88c7-846ff902829e" 00:27:23.901 ], 00:27:23.901 "product_name": "NVMe disk", 00:27:23.901 "block_size": 4096, 00:27:23.901 "num_blocks": 38912, 00:27:23.901 "uuid": "1503d22d-c1a6-4825-88c7-846ff902829e", 00:27:23.901 "numa_id": 0, 00:27:23.901 "assigned_rate_limits": { 00:27:23.901 "rw_ios_per_sec": 0, 00:27:23.901 "rw_mbytes_per_sec": 0, 00:27:23.901 "r_mbytes_per_sec": 0, 00:27:23.901 "w_mbytes_per_sec": 0 00:27:23.901 }, 00:27:23.901 "claimed": false, 00:27:23.901 "zoned": false, 00:27:23.901 "supported_io_types": { 00:27:23.901 "read": true, 00:27:23.901 "write": true, 00:27:23.901 "unmap": true, 00:27:23.901 "flush": true, 00:27:23.901 "reset": true, 00:27:23.901 "nvme_admin": true, 00:27:23.901 "nvme_io": true, 00:27:23.901 "nvme_io_md": false, 00:27:23.901 "write_zeroes": true, 00:27:23.901 "zcopy": false, 00:27:23.901 "get_zone_info": false, 00:27:23.901 "zone_management": false, 00:27:23.901 "zone_append": false, 00:27:23.901 "compare": true, 00:27:23.902 "compare_and_write": true, 00:27:23.902 "abort": true, 00:27:23.902 "seek_hole": false, 00:27:23.902 "seek_data": false, 00:27:23.902 "copy": true, 00:27:23.902 "nvme_iov_md": false 00:27:23.902 }, 00:27:23.902 "memory_domains": [ 00:27:23.902 { 00:27:23.902 "dma_device_id": "system", 00:27:23.902 "dma_device_type": 1 00:27:23.902 } 00:27:23.902 ], 00:27:23.902 "driver_specific": { 
00:27:23.902 "nvme": [ 00:27:23.902 { 00:27:23.902 "trid": { 00:27:23.902 "trtype": "TCP", 00:27:23.902 "adrfam": "IPv4", 00:27:23.902 "traddr": "10.0.0.2", 00:27:23.902 "trsvcid": "4420", 00:27:23.902 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:23.902 }, 00:27:23.902 "ctrlr_data": { 00:27:23.902 "cntlid": 1, 00:27:23.902 "vendor_id": "0x8086", 00:27:23.902 "model_number": "SPDK bdev Controller", 00:27:23.902 "serial_number": "SPDK0", 00:27:23.902 "firmware_revision": "25.01", 00:27:23.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.902 "oacs": { 00:27:23.902 "security": 0, 00:27:23.902 "format": 0, 00:27:23.902 "firmware": 0, 00:27:23.902 "ns_manage": 0 00:27:23.902 }, 00:27:23.902 "multi_ctrlr": true, 00:27:23.902 "ana_reporting": false 00:27:23.902 }, 00:27:23.902 "vs": { 00:27:23.902 "nvme_version": "1.3" 00:27:23.902 }, 00:27:23.902 "ns_data": { 00:27:23.902 "id": 1, 00:27:23.902 "can_share": true 00:27:23.902 } 00:27:23.902 } 00:27:23.902 ], 00:27:23.902 "mp_policy": "active_passive" 00:27:23.902 } 00:27:23.902 } 00:27:23.902 ] 00:27:23.902 14:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1093556 00:27:23.902 14:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:23.902 14:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:24.161 Running I/O for 10 seconds... 00:27:25.097 Latency(us) 00:27:25.097 [2024-11-06T13:11:04.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:25.097 Nvme0n1 : 1.00 25001.00 97.66 0.00 0.00 0.00 0.00 0.00 00:27:25.097 [2024-11-06T13:11:04.381Z] =================================================================================================================== 00:27:25.097 [2024-11-06T13:11:04.381Z] Total : 25001.00 97.66 0.00 0.00 0.00 0.00 0.00 00:27:25.097 00:27:26.097 14:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:26.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:26.097 Nvme0n1 : 2.00 25139.50 98.20 0.00 0.00 0.00 0.00 0.00 00:27:26.097 [2024-11-06T13:11:05.381Z] =================================================================================================================== 00:27:26.097 [2024-11-06T13:11:05.381Z] Total : 25139.50 98.20 0.00 0.00 0.00 0.00 0.00 00:27:26.097 00:27:26.097 true 00:27:26.097 14:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:26.097 14:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:26.358 14:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:26.358 14:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 
-- # (( data_clusters == 99 )) 00:27:26.358 14:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1093556 00:27:26.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:26.925 Nvme0n1 : 3.00 25163.33 98.29 0.00 0.00 0.00 0.00 0.00 00:27:26.925 [2024-11-06T13:11:06.209Z] =================================================================================================================== 00:27:26.925 [2024-11-06T13:11:06.209Z] Total : 25163.33 98.29 0.00 0.00 0.00 0.00 0.00 00:27:26.925 00:27:28.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:28.302 Nvme0n1 : 4.00 25207.00 98.46 0.00 0.00 0.00 0.00 0.00 00:27:28.302 [2024-11-06T13:11:07.586Z] =================================================================================================================== 00:27:28.302 [2024-11-06T13:11:07.586Z] Total : 25207.00 98.46 0.00 0.00 0.00 0.00 0.00 00:27:28.302 00:27:29.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:29.238 Nvme0n1 : 5.00 25246.60 98.62 0.00 0.00 0.00 0.00 0.00 00:27:29.238 [2024-11-06T13:11:08.522Z] =================================================================================================================== 00:27:29.238 [2024-11-06T13:11:08.522Z] Total : 25246.60 98.62 0.00 0.00 0.00 0.00 0.00 00:27:29.238 00:27:30.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:30.176 Nvme0n1 : 6.00 25272.17 98.72 0.00 0.00 0.00 0.00 0.00 00:27:30.176 [2024-11-06T13:11:09.460Z] =================================================================================================================== 00:27:30.176 [2024-11-06T13:11:09.460Z] Total : 25272.17 98.72 0.00 0.00 0.00 0.00 0.00 00:27:30.176 00:27:31.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:31.113 Nvme0n1 : 7.00 25282.71 98.76 0.00 0.00 0.00 0.00 0.00 00:27:31.113 [2024-11-06T13:11:10.397Z] =================================================================================================================== 00:27:31.113 [2024-11-06T13:11:10.397Z] Total : 25282.71 98.76 0.00 0.00 0.00 0.00 0.00 00:27:31.113 00:27:32.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:32.052 Nvme0n1 : 8.00 25296.62 98.81 0.00 0.00 0.00 0.00 0.00 00:27:32.052 [2024-11-06T13:11:11.336Z] =================================================================================================================== 00:27:32.052 [2024-11-06T13:11:11.336Z] Total : 25296.62 98.81 0.00 0.00 0.00 0.00 0.00 00:27:32.052 00:27:32.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:32.991 Nvme0n1 : 9.00 25315.67 98.89 0.00 0.00 0.00 0.00 0.00 00:27:32.991 [2024-11-06T13:11:12.275Z] =================================================================================================================== 00:27:32.991 [2024-11-06T13:11:12.275Z] Total : 25315.67 98.89 0.00 0.00 0.00 0.00 0.00 00:27:32.991 00:27:33.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:33.931 Nvme0n1 : 10.00 25324.10 98.92 0.00 0.00 0.00 0.00 0.00 00:27:33.931 [2024-11-06T13:11:13.215Z] =================================================================================================================== 00:27:33.931 [2024-11-06T13:11:13.215Z] Total : 25324.10 98.92 0.00 0.00 0.00 0.00 0.00 00:27:33.931 00:27:34.191 00:27:34.191 Latency(us) 00:27:34.191 
[2024-11-06T13:11:13.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:34.191 Nvme0n1 : 10.00 25330.23 98.95 0.00 0.00 5050.54 1583.79 10321.92 00:27:34.191 [2024-11-06T13:11:13.475Z] =================================================================================================================== 00:27:34.191 [2024-11-06T13:11:13.475Z] Total : 25330.23 98.95 0.00 0.00 5050.54 1583.79 10321.92 00:27:34.191 { 00:27:34.191 "results": [ 00:27:34.191 { 00:27:34.191 "job": "Nvme0n1", 00:27:34.191 "core_mask": "0x2", 00:27:34.191 "workload": "randwrite", 00:27:34.191 "status": "finished", 00:27:34.191 "queue_depth": 128, 00:27:34.191 "io_size": 4096, 00:27:34.191 "runtime": 10.002635, 00:27:34.191 "iops": 25330.22548558455, 00:27:34.191 "mibps": 98.94619330306465, 00:27:34.191 "io_failed": 0, 00:27:34.191 "io_timeout": 0, 00:27:34.191 "avg_latency_us": 5050.54136304494, 00:27:34.191 "min_latency_us": 1583.7866666666666, 00:27:34.191 "max_latency_us": 10321.92 00:27:34.191 } 00:27:34.191 ], 00:27:34.191 "core_count": 1 00:27:34.191 } 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1093431 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1093431 ']' 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1093431 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1093431 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1093431' 00:27:34.191 killing process with pid 1093431 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1093431 00:27:34.191 Received shutdown signal, test time was about 10.000000 seconds 00:27:34.191 00:27:34.191 Latency(us) 00:27:34.191 [2024-11-06T13:11:13.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.191 [2024-11-06T13:11:13.475Z] =================================================================================================================== 00:27:34.191 [2024-11-06T13:11:13.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1093431 00:27:34.191 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:34.451 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:34.451 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:34.451 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1089664 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1089664 00:27:34.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1089664 Killed "${NVMF_APP[@]}" "$@" 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1095861 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1095861 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1095861 ']' 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
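(The dirty-path teardown just traced is, in outline — a sketch, with $lvs and $nvmfpid standing in for the UUID and pid shown above:)

    rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'  # 61 = 99 total - 38 allocated
    kill -9 "$nvmfpid"   # dirty variant: kill the target without teardown, leaving the lvstore dirty

The target is then restarted (here with --interrupt-mode -m 0x1) so that recovery of the dirty lvstore can be exercised.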
00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:34.712 14:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:34.712 [2024-11-06 14:11:13.952956] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:34.712 [2024-11-06 14:11:13.953963] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:34.712 [2024-11-06 14:11:13.954010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.972 [2024-11-06 14:11:14.026358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.972 [2024-11-06 14:11:14.057677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.972 [2024-11-06 14:11:14.057708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.972 [2024-11-06 14:11:14.057714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.972 [2024-11-06 14:11:14.057719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.972 [2024-11-06 14:11:14.057723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.972 [2024-11-06 14:11:14.058216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.972 [2024-11-06 14:11:14.110097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:34.972 [2024-11-06 14:11:14.110293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
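(Once the restarted target is up, re-creating the AIO bdev is what triggers blobstore recovery on the dirty lvstore; the test then checks that the earlier grow survived the unclean shutdown. Sketch, same stand-in variables as above:)

    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096     # examine replays the dirty metadata
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b "$lvol" -t 2000             # lvol is back, 38 clusters still allocated
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'               # 99: the grow persisted
    rpc.py bdev_aio_delete aio_bdev                      # hot-remove closes the lvstore...
    rpc.py bdev_lvol_get_lvstores -u "$lvs"              # ...so this now fails with -19 "No such device"
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096     # recover once more, then tear everything down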
00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.972 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:35.233 [2024-11-06 14:11:14.301000] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:35.233 [2024-11-06 14:11:14.301106] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:35.233 [2024-11-06 14:11:14.301133] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1503d22d-c1a6-4825-88c7-846ff902829e 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1503d22d-c1a6-4825-88c7-846ff902829e 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:35.233 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1503d22d-c1a6-4825-88c7-846ff902829e -t 2000 00:27:35.493 [ 00:27:35.493 { 00:27:35.493 "name": "1503d22d-c1a6-4825-88c7-846ff902829e", 00:27:35.493 "aliases": [ 00:27:35.493 "lvs/lvol" 00:27:35.493 ], 00:27:35.493 "product_name": "Logical Volume", 00:27:35.493 "block_size": 4096, 00:27:35.493 "num_blocks": 38912, 00:27:35.493 "uuid": "1503d22d-c1a6-4825-88c7-846ff902829e", 00:27:35.493 "assigned_rate_limits": { 00:27:35.493 "rw_ios_per_sec": 0, 00:27:35.493 "rw_mbytes_per_sec": 0, 00:27:35.493 
"r_mbytes_per_sec": 0, 00:27:35.493 "w_mbytes_per_sec": 0 00:27:35.493 }, 00:27:35.493 "claimed": false, 00:27:35.493 "zoned": false, 00:27:35.493 "supported_io_types": { 00:27:35.493 "read": true, 00:27:35.493 "write": true, 00:27:35.493 "unmap": true, 00:27:35.493 "flush": false, 00:27:35.493 "reset": true, 00:27:35.493 "nvme_admin": false, 00:27:35.493 "nvme_io": false, 00:27:35.493 "nvme_io_md": false, 00:27:35.493 "write_zeroes": true, 00:27:35.493 "zcopy": false, 00:27:35.493 "get_zone_info": false, 00:27:35.493 "zone_management": false, 00:27:35.493 "zone_append": false, 00:27:35.493 "compare": false, 00:27:35.493 "compare_and_write": false, 00:27:35.493 "abort": false, 00:27:35.493 "seek_hole": true, 00:27:35.493 "seek_data": true, 00:27:35.493 "copy": false, 00:27:35.493 "nvme_iov_md": false 00:27:35.493 }, 00:27:35.493 "driver_specific": { 00:27:35.493 "lvol": { 00:27:35.493 "lvol_store_uuid": "eab55934-84cf-4eb0-beaf-af6835950d73", 00:27:35.493 "base_bdev": "aio_bdev", 00:27:35.493 "thin_provision": false, 00:27:35.493 "num_allocated_clusters": 38, 00:27:35.493 "snapshot": false, 00:27:35.493 "clone": false, 00:27:35.493 "esnap_clone": false 00:27:35.493 } 00:27:35.493 } 00:27:35.493 } 00:27:35.493 ] 00:27:35.493 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:27:35.493 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:35.493 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:27:35.753 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:27:35.753 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:35.753 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:27:35.753 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:27:35.753 14:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:36.013 [2024-11-06 14:11:15.110735] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:36.013 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:36.013 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:36.014 14:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:36.014 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:36.014 request: 00:27:36.014 { 00:27:36.014 "uuid": "eab55934-84cf-4eb0-beaf-af6835950d73", 00:27:36.014 "method": "bdev_lvol_get_lvstores", 00:27:36.014 "req_id": 1 00:27:36.014 } 00:27:36.014 Got JSON-RPC error response 00:27:36.014 response: 00:27:36.014 { 00:27:36.014 "code": -19, 00:27:36.014 "message": "No such device" 00:27:36.014 } 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:36.274 aio_bdev 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1503d22d-c1a6-4825-88c7-846ff902829e 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1503d22d-c1a6-4825-88c7-846ff902829e 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:36.274 14:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:36.274 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:36.534 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1503d22d-c1a6-4825-88c7-846ff902829e -t 2000 00:27:36.534 [ 00:27:36.534 { 00:27:36.534 "name": "1503d22d-c1a6-4825-88c7-846ff902829e", 00:27:36.534 "aliases": [ 00:27:36.534 "lvs/lvol" 00:27:36.534 ], 00:27:36.534 "product_name": "Logical Volume", 00:27:36.534 "block_size": 4096, 00:27:36.534 "num_blocks": 38912, 00:27:36.534 "uuid": "1503d22d-c1a6-4825-88c7-846ff902829e", 00:27:36.534 "assigned_rate_limits": { 00:27:36.534 "rw_ios_per_sec": 0, 00:27:36.534 "rw_mbytes_per_sec": 0, 00:27:36.534 "r_mbytes_per_sec": 0, 00:27:36.534 "w_mbytes_per_sec": 0 00:27:36.534 }, 00:27:36.534 "claimed": false, 00:27:36.534 "zoned": false, 00:27:36.534 "supported_io_types": { 00:27:36.534 "read": true, 00:27:36.534 "write": true, 00:27:36.534 "unmap": true, 00:27:36.534 "flush": false, 00:27:36.534 "reset": true, 00:27:36.534 "nvme_admin": false, 00:27:36.534 "nvme_io": false, 00:27:36.534 "nvme_io_md": false, 00:27:36.534 "write_zeroes": true, 00:27:36.534 "zcopy": false, 00:27:36.534 "get_zone_info": false, 00:27:36.534 "zone_management": false, 00:27:36.534 "zone_append": false, 00:27:36.534 "compare": false, 00:27:36.534 "compare_and_write": false, 00:27:36.534 "abort": false, 00:27:36.534 "seek_hole": true, 00:27:36.534 "seek_data": true, 00:27:36.534 "copy": false, 00:27:36.534 "nvme_iov_md": false 00:27:36.534 }, 00:27:36.534 "driver_specific": { 00:27:36.534 "lvol": { 00:27:36.534 "lvol_store_uuid": "eab55934-84cf-4eb0-beaf-af6835950d73", 00:27:36.534 "base_bdev": "aio_bdev", 00:27:36.534 "thin_provision": false, 00:27:36.534 "num_allocated_clusters": 38, 00:27:36.534 "snapshot": false, 00:27:36.534 "clone": false, 00:27:36.534 "esnap_clone": false 00:27:36.534 } 00:27:36.534 } 00:27:36.534 } 00:27:36.534 ] 00:27:36.534 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:27:36.535 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:36.535 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:36.794 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:36.794 14:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:36.794 14:11:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:37.053 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:37.053 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1503d22d-c1a6-4825-88c7-846ff902829e 00:27:37.053 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eab55934-84cf-4eb0-beaf-af6835950d73 00:27:37.312 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:37.572 00:27:37.572 real 0m15.934s 00:27:37.572 user 0m34.202s 00:27:37.572 sys 0m2.694s 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:37.572 ************************************ 00:27:37.572 END TEST lvs_grow_dirty 00:27:37.572 ************************************ 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:37.572 nvmf_trace.0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
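(The epilogue around this point is the stock nvmf cleanup: archive the shared-memory trace, then nvmftestfini unloads the kernel initiator modules, kills the target, and restores networking. Roughly — a sketch, with $nvmfpid the pid reported above and $output_dir standing in for the Jenkins output path:)

    tar -C /dev/shm -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    sync
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                        # killprocess
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK's iptables rules
    ip -4 addr flush cvl_0_1                               # tear down the test interface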
00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:37.572 rmmod nvme_tcp 00:27:37.572 rmmod nvme_fabrics 00:27:37.572 rmmod nvme_keyring 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1095861 ']' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1095861 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1095861 ']' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1095861 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1095861 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1095861' 00:27:37.572 killing process with pid 1095861 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1095861 00:27:37.572 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1095861 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.832 14:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.755 14:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.755 00:27:39.755 real 0m39.496s 00:27:39.755 user 0m51.190s 00:27:39.755 sys 0m8.184s 00:27:39.755 14:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:39.755 14:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:39.755 ************************************ 00:27:39.755 END TEST nvmf_lvs_grow 00:27:39.755 ************************************ 00:27:39.755 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:39.755 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:39.755 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:39.755 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:40.073 ************************************ 00:27:40.073 START TEST nvmf_bdev_io_wait 00:27:40.073 ************************************ 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:40.073 * Looking for test storage... 
00:27:40.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.073 --rc genhtml_branch_coverage=1 00:27:40.073 --rc genhtml_function_coverage=1 00:27:40.073 --rc genhtml_legend=1 00:27:40.073 --rc geninfo_all_blocks=1 00:27:40.073 --rc geninfo_unexecuted_blocks=1 00:27:40.073 00:27:40.073 ' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.073 --rc genhtml_branch_coverage=1 00:27:40.073 --rc genhtml_function_coverage=1 00:27:40.073 --rc genhtml_legend=1 00:27:40.073 --rc geninfo_all_blocks=1 00:27:40.073 --rc geninfo_unexecuted_blocks=1 00:27:40.073 00:27:40.073 ' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.073 --rc genhtml_branch_coverage=1 00:27:40.073 --rc genhtml_function_coverage=1 00:27:40.073 --rc genhtml_legend=1 00:27:40.073 --rc geninfo_all_blocks=1 00:27:40.073 --rc geninfo_unexecuted_blocks=1 00:27:40.073 00:27:40.073 ' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.073 --rc genhtml_branch_coverage=1 00:27:40.073 --rc genhtml_function_coverage=1 00:27:40.073 --rc genhtml_legend=1 00:27:40.073 --rc geninfo_all_blocks=1 00:27:40.073 --rc 
geninfo_unexecuted_blocks=1 00:27:40.073 00:27:40.073 ' 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.073 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:27:40.074 14:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
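A note on the PATH values traced above: every time a test script sources /etc/opt/spdk-pkgdep/paths/export.sh, the same three toolchain directories (go, protoc, golangci-lint) are prepended again, which is why the variable keeps growing over the run. This is harmless, but an idempotent prepend would keep it flat; a minimal sketch (the helper name path_prepend is illustrative, not SPDK code):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, skip
            *) PATH="$1:$PATH" ;;    # prepend once
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH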
00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
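As the trace above shows, gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device pairs: Intel (0x8086) IDs 0x1592/0x159b fill the e810 array, 0x37d2 fills x722, and the 0x15b3 IDs fill mlx; with SPDK_TEST_NVMF_NICS=e810 the e810 list then becomes pci_devs. A minimal standalone sketch of the same idea, assuming lspci is available and that lspci -nD prints lines like "0000:31:00.0 0200: 8086:159b"; the variable names here are illustrative:

    intel=8086
    e810_ids="1592 159b"
    e810=()
    while read -r addr _ vendev _; do
        ven=${vendev%%:*}     # vendor, e.g. 8086
        dev=${vendev#*:}      # device, e.g. 159b
        for id in $e810_ids; do
            [[ $ven == "$intel" && $dev == "$id" ]] && e810+=("$addr")
        done
    done < <(lspci -nD)
    (( ${#e810[@]} )) && printf 'Found %s\n' "${e810[@]}"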
00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:45.386 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:45.386 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.386 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:45.387 Found net devices under 0000:31:00.0: cvl_0_0 00:27:45.387 
14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:45.387 Found net devices under 0000:31:00.1: cvl_0_1 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.387 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:27:45.646 00:27:45.646 --- 10.0.0.2 ping statistics --- 00:27:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.646 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:27:45.646 00:27:45.646 --- 10.0.0.1 ping statistics --- 00:27:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.646 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.646 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1100862 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1100862 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1100862 ']' 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
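At this point the trace has built the loopback topology the test runs on: port 0000:31:00.0 (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), with an iptables ACCEPT rule for the NVMe/TCP port and a ping in each direction to verify reachability. Condensed from the commands in the trace (requires root; interface names as discovered above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1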
00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:45.647 14:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:27:45.906 [2024-11-06 14:11:24.939363] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.906 [2024-11-06 14:11:24.940525] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:45.906 [2024-11-06 14:11:24.940575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.906 [2024-11-06 14:11:25.034398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.906 [2024-11-06 14:11:25.089547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.906 [2024-11-06 14:11:25.089603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.906 [2024-11-06 14:11:25.089613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.906 [2024-11-06 14:11:25.089620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.906 [2024-11-06 14:11:25.089627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.906 [2024-11-06 14:11:25.091732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.906 [2024-11-06 14:11:25.091895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.906 [2024-11-06 14:11:25.092048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.906 [2024-11-06 14:11:25.092053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.906 [2024-11-06 14:11:25.092482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:46.474 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:46.474 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:27:46.474 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:46.474 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.474 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 [2024-11-06 14:11:25.837285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:46.734 [2024-11-06 14:11:25.837560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:46.734 [2024-11-06 14:11:25.837884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:46.734 [2024-11-06 14:11:25.837923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
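Because nvmf_tgt was launched with --wait-for-rpc, the app parks before subsystem initialization, which is the only window in which bdev_set_options may still be changed. The test uses that window to shrink the bdev_io pool to 5 entries with a cache of 1, so the queue-depth-128 workloads below are all but guaranteed to exhaust spdk_bdev_io allocations and exercise the bdev_io_wait retry path this test is named for, and only then releases the app with framework_start_init. The same sequence via rpc.py, condensed from the trace ($SPDK standing in for the Jenkins workspace path; the real script also waits for the RPC socket before issuing commands):

    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    # ... wait for /var/tmp/spdk.sock to appear ...
    "$SPDK/scripts/rpc.py" bdev_set_options -p 5 -c 1   # tiny bdev_io pool: forces the io_wait path
    "$SPDK/scripts/rpc.py" framework_start_init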
00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 [2024-11-06 14:11:25.844729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 Malloc0 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:46.734 [2024-11-06 14:11:25.896909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1101199 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1101202 00:27:46.734 14:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1101203 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.734 { 00:27:46.734 "params": { 00:27:46.734 "name": "Nvme$subsystem", 00:27:46.734 "trtype": "$TEST_TRANSPORT", 00:27:46.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.734 "adrfam": "ipv4", 00:27:46.734 "trsvcid": "$NVMF_PORT", 00:27:46.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.734 "hdgst": ${hdgst:-false}, 00:27:46.734 "ddgst": ${ddgst:-false} 00:27:46.734 }, 00:27:46.734 "method": "bdev_nvme_attach_controller" 00:27:46.734 } 00:27:46.734 EOF 00:27:46.734 )") 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1101206 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.734 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.734 { 00:27:46.734 "params": { 00:27:46.734 "name": "Nvme$subsystem", 00:27:46.734 "trtype": "$TEST_TRANSPORT", 00:27:46.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.734 "adrfam": "ipv4", 00:27:46.734 "trsvcid": "$NVMF_PORT", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.735 "hdgst": ${hdgst:-false}, 00:27:46.735 "ddgst": ${ddgst:-false} 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 } 00:27:46.735 EOF 00:27:46.735 )") 
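With the framework running, the target is provisioned entirely over RPC: a TCP transport with 8 KiB I/O units, a 64 MiB / 512 B-block Malloc bdev, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The equivalent rpc.py calls, taken directly from the rpc_cmd lines in the trace:

    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420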
00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.735 { 00:27:46.735 "params": { 00:27:46.735 "name": "Nvme$subsystem", 00:27:46.735 "trtype": "$TEST_TRANSPORT", 00:27:46.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.735 "adrfam": "ipv4", 00:27:46.735 "trsvcid": "$NVMF_PORT", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.735 "hdgst": ${hdgst:-false}, 00:27:46.735 "ddgst": ${ddgst:-false} 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 } 00:27:46.735 EOF 00:27:46.735 )") 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.735 { 00:27:46.735 "params": { 00:27:46.735 "name": "Nvme$subsystem", 00:27:46.735 "trtype": "$TEST_TRANSPORT", 00:27:46.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.735 "adrfam": "ipv4", 00:27:46.735 "trsvcid": "$NVMF_PORT", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.735 "hdgst": ${hdgst:-false}, 00:27:46.735 "ddgst": ${ddgst:-false} 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 } 00:27:46.735 EOF 00:27:46.735 )") 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1101199 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.735 "params": { 00:27:46.735 "name": "Nvme1", 00:27:46.735 "trtype": "tcp", 00:27:46.735 "traddr": "10.0.0.2", 00:27:46.735 "adrfam": "ipv4", 00:27:46.735 "trsvcid": "4420", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.735 "hdgst": false, 00:27:46.735 "ddgst": false 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 }' 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.735 "params": { 00:27:46.735 "name": "Nvme1", 00:27:46.735 "trtype": "tcp", 00:27:46.735 "traddr": "10.0.0.2", 00:27:46.735 "adrfam": "ipv4", 00:27:46.735 "trsvcid": "4420", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.735 "hdgst": false, 00:27:46.735 "ddgst": false 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 }' 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.735 "params": { 00:27:46.735 "name": "Nvme1", 00:27:46.735 "trtype": "tcp", 00:27:46.735 "traddr": "10.0.0.2", 00:27:46.735 "adrfam": "ipv4", 00:27:46.735 "trsvcid": "4420", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.735 "hdgst": false, 00:27:46.735 "ddgst": false 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 }' 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:46.735 14:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.735 "params": { 00:27:46.735 "name": "Nvme1", 00:27:46.735 "trtype": "tcp", 00:27:46.735 "traddr": "10.0.0.2", 00:27:46.735 "adrfam": "ipv4", 00:27:46.735 "trsvcid": "4420", 00:27:46.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.735 "hdgst": false, 00:27:46.735 "ddgst": false 00:27:46.735 }, 00:27:46.735 "method": "bdev_nvme_attach_controller" 00:27:46.735 }' 00:27:46.735 [2024-11-06 14:11:25.938163] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:46.735 [2024-11-06 14:11:25.938224] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:27:46.735 [2024-11-06 14:11:25.939442] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
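The four printf blocks above are gen_nvmf_target_json at work: the heredoc template has its $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP placeholders filled in, and each bdevperf instance receives the result through a process substitution (--json /dev/fd/63). Four instances run concurrently, one workload per core: write (-m 0x10), read (0x20), flush (0x40) and unmap (0x80), each with -q 128 -o 4096 -t 1 -s 256. A standalone equivalent for the write instance, assuming the standard SPDK JSON-config wrapper around the attach-controller entry (the exact wrapper gen_nvmf_target_json emits may differ in detail, and /tmp/nvme1.json is illustrative):

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    "$SPDK/build/examples/bdevperf" -m 0x10 -i 1 --json /tmp/nvme1.json \
        -q 128 -o 4096 -w write -t 1 -s 256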
00:27:46.735 [2024-11-06 14:11:25.939504] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:46.735 [2024-11-06 14:11:25.943029] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:46.735 [2024-11-06 14:11:25.943080] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:46.735 [2024-11-06 14:11:25.943103] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:27:46.735 [2024-11-06 14:11:25.943151] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:27:46.995 [2024-11-06 14:11:26.143602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.995 [2024-11-06 14:11:26.181854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:46.995 [2024-11-06 14:11:26.228593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.995 [2024-11-06 14:11:26.267411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:47.253 [2024-11-06 14:11:26.289313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.253 [2024-11-06 14:11:26.324640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:47.253 [2024-11-06 14:11:26.343668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.253 [2024-11-06 14:11:26.377851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:47.253 Running I/O for 1 seconds... 00:27:47.253 Running I/O for 1 seconds... 00:27:47.253 Running I/O for 1 seconds... 00:27:47.512 Running I/O for 1 seconds... 
00:27:48.449 12143.00 IOPS, 47.43 MiB/s 00:27:48.449 Latency(us) 00:27:48.449 [2024-11-06T13:11:27.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.449 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:27:48.449 Nvme1n1 : 1.01 12203.98 47.67 0.00 0.00 10452.74 4887.89 12178.77 00:27:48.449 [2024-11-06T13:11:27.733Z] =================================================================================================================== 00:27:48.449 [2024-11-06T13:11:27.733Z] Total : 12203.98 47.67 0.00 0.00 10452.74 4887.89 12178.77 00:27:48.449 9794.00 IOPS, 38.26 MiB/s 00:27:48.449 Latency(us) 00:27:48.449 [2024-11-06T13:11:27.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.449 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:27:48.449 Nvme1n1 : 1.01 9868.96 38.55 0.00 0.00 12921.98 2211.84 16602.45 00:27:48.449 [2024-11-06T13:11:27.733Z] =================================================================================================================== 00:27:48.449 [2024-11-06T13:11:27.733Z] Total : 9868.96 38.55 0.00 0.00 12921.98 2211.84 16602.45 00:27:48.449 10029.00 IOPS, 39.18 MiB/s 00:27:48.449 Latency(us) 00:27:48.450 [2024-11-06T13:11:27.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.450 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:27:48.450 Nvme1n1 : 1.01 10094.86 39.43 0.00 0.00 12636.58 4778.67 19114.67 00:27:48.450 [2024-11-06T13:11:27.734Z] =================================================================================================================== 00:27:48.450 [2024-11-06T13:11:27.734Z] Total : 10094.86 39.43 0.00 0.00 12636.58 4778.67 19114.67 00:27:48.450 187296.00 IOPS, 731.62 MiB/s 00:27:48.450 Latency(us) 00:27:48.450 [2024-11-06T13:11:27.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.450 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:27:48.450 Nvme1n1 : 1.00 186920.44 730.16 0.00 0.00 681.24 305.49 1979.73 00:27:48.450 [2024-11-06T13:11:27.734Z] =================================================================================================================== 00:27:48.450 [2024-11-06T13:11:27.734Z] Total : 186920.44 730.16 0.00 0.00 681.24 305.49 1979.73 00:27:48.450 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1101202 00:27:48.450 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1101203 00:27:48.708 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1101206 00:27:48.708 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.708 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.708 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.709 rmmod nvme_tcp 00:27:48.709 rmmod nvme_fabrics 00:27:48.709 rmmod nvme_keyring 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1100862 ']' 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1100862 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1100862 ']' 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1100862 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1100862 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1100862' 00:27:48.709 killing process with pid 1100862 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1100862 00:27:48.709 14:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1100862 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
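A quick sanity check on the result tables above: the MiB/s column is simply IOPS times the 4 KiB I/O size, e.g. for the unmap and flush jobs:

    12203.98 IOPS x 4096 B / 2^20 ≈ 47.67 MiB/s
    186920.44 IOPS x 4096 B / 2^20 ≈ 730.16 MiB/s

Both match the table. The flush job's outsized rate is expected: on a RAM-backed Malloc bdev a flush has nothing to persist and completes almost immediately, so it measures reactor overhead rather than data movement.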
00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.968 14:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.872 00:27:50.872 real 0m11.036s 00:27:50.872 user 0m14.919s 00:27:50.872 sys 0m6.174s 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:50.872 ************************************ 00:27:50.872 END TEST nvmf_bdev_io_wait 00:27:50.872 ************************************ 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:50.872 ************************************ 00:27:50.872 START TEST nvmf_queue_depth 00:27:50.872 ************************************ 00:27:50.872 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:51.130 * Looking for test storage... 
00:27:51.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:51.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.130 --rc genhtml_branch_coverage=1 00:27:51.130 --rc genhtml_function_coverage=1 00:27:51.130 --rc genhtml_legend=1 00:27:51.130 --rc geninfo_all_blocks=1 00:27:51.130 --rc geninfo_unexecuted_blocks=1 00:27:51.130 00:27:51.130 ' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:51.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.130 --rc genhtml_branch_coverage=1 00:27:51.130 --rc genhtml_function_coverage=1 00:27:51.130 --rc genhtml_legend=1 00:27:51.130 --rc geninfo_all_blocks=1 00:27:51.130 --rc geninfo_unexecuted_blocks=1 00:27:51.130 00:27:51.130 ' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:51.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.130 --rc genhtml_branch_coverage=1 00:27:51.130 --rc genhtml_function_coverage=1 00:27:51.130 --rc genhtml_legend=1 00:27:51.130 --rc geninfo_all_blocks=1 00:27:51.130 --rc geninfo_unexecuted_blocks=1 00:27:51.130 00:27:51.130 ' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:51.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.130 --rc genhtml_branch_coverage=1 00:27:51.130 --rc genhtml_function_coverage=1 00:27:51.130 --rc genhtml_legend=1 00:27:51.130 --rc geninfo_all_blocks=1 00:27:51.130 --rc 
geninfo_unexecuted_blocks=1 00:27:51.130 00:27:51.130 ' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain triplet repeated by successive prepends; condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=[same entries with /opt/go/1.21.1/bin moved to the front; condensed] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=[same entries with /opt/protoc/21.7/bin moved to the front; condensed] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo [the exported PATH; condensed] 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.130 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.131 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.131 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.131 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.131 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.131 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.131 14:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
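
What follows is NIC discovery: nvmf/common.sh keeps per-family arrays (e810, x722, mlx) filled by indexing a pci_bus_cache map with vendor:device IDs, picks the e810 list for this E810 run, and then maps each PCI address to its kernel interface through sysfs. Condensed to its core the pattern is (a sketch; pci_bus_cache is assumed to be populated from lspci elsewhere):

    declare -A pci_bus_cache     # assumed: "vendor:device" -> "pciaddr1 pciaddr2 ..."
    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # unquoted on purpose: splits into one
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # array element per PCI address
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # PCI address -> netdev dir(s)
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
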
00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.402 14:11:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:56.402 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:56.402 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:27:56.402 Found net devices under 0000:31:00.0: cvl_0_0 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.402 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:56.403 Found net devices under 0000:31:00.1: cvl_0_1 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.403 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:56.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:27:56.661 00:27:56.661 --- 10.0.0.2 ping statistics --- 00:27:56.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.661 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:27:56.661 00:27:56.661 --- 10.0.0.1 ping statistics --- 00:27:56.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.661 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1105906 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1105906 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1105906 ']' 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
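
The two successful pings close out nvmf_tcp_init: the two E810 ports are evidently looped back to each other on this rig, port cvl_0_0 (10.0.0.2, the target side) is moved into a private network namespace while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, so target and initiator traffic cross a real link on a single host. The namespace plumbing traced above boils down to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
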
00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:56.661 14:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:27:56.661 [2024-11-06 14:11:35.873843] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:56.661 [2024-11-06 14:11:35.874818] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:27:56.661 [2024-11-06 14:11:35.874855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.919 [2024-11-06 14:11:35.961254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.919 [2024-11-06 14:11:35.996660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.919 [2024-11-06 14:11:35.996690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.919 [2024-11-06 14:11:35.996698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.919 [2024-11-06 14:11:35.996705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.919 [2024-11-06 14:11:35.996710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.919 [2024-11-06 14:11:35.997265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.919 [2024-11-06 14:11:36.052922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:56.919 [2024-11-06 14:11:36.053169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
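
nvmfappstart has now launched the target inside the namespace with --interrupt-mode, which is why the notices above show a single reactor whose spdk_threads sit in interrupt (epoll-driven) mode instead of busy-polling; the test proceeds once the RPC socket answers. A sketch of what the nvmfappstart/waitforlisten helpers amount to, with paths as in this run:

    # start the target in the target namespace, then poll its RPC socket
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
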
00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.488 [2024-11-06 14:11:36.702020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.488 Malloc0 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.488 [2024-11-06 14:11:36.757855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1106251 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1106251 /var/tmp/bdevperf.sock 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1106251 ']' 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:57.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:57.488 14:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:57.749 [2024-11-06 14:11:36.799738] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
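
Strung together, the RPC calls above provision the entire target, and bdevperf is then started as the initiator-side load generator with the queue depth under test. With rpc.py standing for scripts/rpc.py against the default /var/tmp/spdk.sock, the sequence reduces to:

    # target side: transport, backing bdev, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: 4 KiB verify I/O at queue depth 1024 for 10 s
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
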
00:27:57.749 [2024-11-06 14:11:36.799802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106251 ] 00:27:57.749 [2024-11-06 14:11:36.884201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.749 [2024-11-06 14:11:36.937068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.315 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:58.316 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:27:58.316 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:58.316 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.316 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:58.574 NVMe0n1 00:27:58.574 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.574 14:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:58.833 Running I/O for 10 seconds... 00:28:00.704 10240.00 IOPS, 40.00 MiB/s [2024-11-06T13:11:40.924Z] 11270.00 IOPS, 44.02 MiB/s [2024-11-06T13:11:42.299Z] 12064.33 IOPS, 47.13 MiB/s [2024-11-06T13:11:43.236Z] 12519.50 IOPS, 48.90 MiB/s [2024-11-06T13:11:44.175Z] 12711.60 IOPS, 49.65 MiB/s [2024-11-06T13:11:45.113Z] 12876.17 IOPS, 50.30 MiB/s [2024-11-06T13:11:46.091Z] 13010.14 IOPS, 50.82 MiB/s [2024-11-06T13:11:47.031Z] 13064.50 IOPS, 51.03 MiB/s [2024-11-06T13:11:47.981Z] 13160.89 IOPS, 51.41 MiB/s [2024-11-06T13:11:47.981Z] 13215.60 IOPS, 51.62 MiB/s 00:28:08.697 Latency(us) 00:28:08.697 [2024-11-06T13:11:47.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.697 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:08.697 Verification LBA range: start 0x0 length 0x4000 00:28:08.697 NVMe0n1 : 10.05 13252.94 51.77 0.00 0.00 77013.00 18896.21 62914.56 00:28:08.697 [2024-11-06T13:11:47.981Z] =================================================================================================================== 00:28:08.697 [2024-11-06T13:11:47.981Z] Total : 13252.94 51.77 0.00 0.00 77013.00 18896.21 62914.56 00:28:08.697 { 00:28:08.697 "results": [ 00:28:08.697 { 00:28:08.697 "job": "NVMe0n1", 00:28:08.697 "core_mask": "0x1", 00:28:08.697 "workload": "verify", 00:28:08.697 "status": "finished", 00:28:08.697 "verify_range": { 00:28:08.697 "start": 0, 00:28:08.697 "length": 16384 00:28:08.697 }, 00:28:08.697 "queue_depth": 1024, 00:28:08.697 "io_size": 4096, 00:28:08.697 "runtime": 10.049088, 00:28:08.697 "iops": 13252.943948744403, 00:28:08.697 "mibps": 51.769312299782825, 00:28:08.697 "io_failed": 0, 00:28:08.697 "io_timeout": 0, 00:28:08.697 "avg_latency_us": 77013.00266186114, 00:28:08.697 "min_latency_us": 18896.213333333333, 00:28:08.697 "max_latency_us": 62914.56 00:28:08.697 } 00:28:08.697 
], 00:28:08.697 "core_count": 1 00:28:08.697 } 00:28:08.697 14:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1106251 00:28:08.697 14:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1106251 ']' 00:28:08.697 14:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1106251 00:28:08.697 14:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:28:08.958 14:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:08.958 14:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1106251 00:28:08.958 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:08.958 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:08.958 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1106251' 00:28:08.958 killing process with pid 1106251 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1106251 00:28:08.959 Received shutdown signal, test time was about 10.000000 seconds 00:28:08.959 00:28:08.959 Latency(us) 00:28:08.959 [2024-11-06T13:11:48.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.959 [2024-11-06T13:11:48.243Z] =================================================================================================================== 00:28:08.959 [2024-11-06T13:11:48.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1106251 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.959 rmmod nvme_tcp 00:28:08.959 rmmod nvme_fabrics 00:28:08.959 rmmod nvme_keyring 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:08.959 14:11:48 
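
The reported numbers are internally consistent, which is worth checking whenever a queue-depth run looks suspicious: bandwidth is just IOPS times I/O size, and at a fixed queue depth the average latency follows from Little's law (outstanding I/O = IOPS x latency). Against the JSON above:

    echo '13252.94 * 4096 / 1048576' | bc -l   # -> 51.77 MiB/s, matching the table
    echo '1024 / 13252.94 * 1000' | bc -l      # -> ~77.3 ms; reported average is 77.0 ms
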
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1105906 ']' 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1105906 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1105906 ']' 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1105906 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1105906 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1105906' 00:28:08.959 killing process with pid 1105906 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1105906 00:28:08.959 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1105906 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.220 14:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.154 00:28:11.154 real 0m20.257s 00:28:11.154 user 0m23.941s 00:28:11.154 sys 0m5.708s 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
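
Teardown never has to track firewall rule positions: the ipts wrapper tagged each rule it inserted with an SPDK_NVMF comment when the test network came up, so cleanup is a single filter over the saved ruleset. The pair of operations, as traced earlier and here:

    # setup: insert the rule with a recognizable comment tag
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore
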
common/autotest_common.sh@1128 -- # xtrace_disable 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:11.154 ************************************ 00:28:11.154 END TEST nvmf_queue_depth 00:28:11.154 ************************************ 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:11.154 ************************************ 00:28:11.154 START TEST nvmf_target_multipath 00:28:11.154 ************************************ 00:28:11.154 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:11.414 * Looking for test storage... 00:28:11.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:11.414 14:11:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.414 --rc genhtml_branch_coverage=1 00:28:11.414 --rc genhtml_function_coverage=1 00:28:11.414 --rc genhtml_legend=1 00:28:11.414 --rc geninfo_all_blocks=1 00:28:11.414 --rc geninfo_unexecuted_blocks=1 00:28:11.414 00:28:11.414 ' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.414 --rc genhtml_branch_coverage=1 00:28:11.414 --rc genhtml_function_coverage=1 00:28:11.414 --rc genhtml_legend=1 00:28:11.414 --rc geninfo_all_blocks=1 00:28:11.414 --rc geninfo_unexecuted_blocks=1 00:28:11.414 00:28:11.414 ' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.414 --rc genhtml_branch_coverage=1 00:28:11.414 --rc genhtml_function_coverage=1 00:28:11.414 --rc genhtml_legend=1 00:28:11.414 --rc geninfo_all_blocks=1 00:28:11.414 --rc 
geninfo_unexecuted_blocks=1 00:28:11.414 00:28:11.414 ' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.414 --rc genhtml_branch_coverage=1 00:28:11.414 --rc genhtml_function_coverage=1 00:28:11.414 --rc genhtml_legend=1 00:28:11.414 --rc geninfo_all_blocks=1 00:28:11.414 --rc geninfo_unexecuted_blocks=1 00:28:11.414 00:28:11.414 ' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.414 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
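
Every trace line in this log has the same anatomy: Jenkins' elapsed-time column, then bash's xtrace prefix carrying a wall-clock time, the nested test name, and source@line before the command. Bash produces that prefix because PS4 is expanded before each traced command; a stripped-down approximation (SPDK's real PS4 also threads the test name through):

    PS4=' \t ${BASH_SOURCE##*/}@${LINENO} -- \$ '   # \t: time; \$ renders as '#' when run as root
    set -x
    echo hello    # traces roughly as: 14:11:50 demo.sh@3 -- # echo hello
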
00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=[golangci/protoc/go toolchain entries followed by the system path, identical to the queue_depth trace above; condensed] 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=[same, /opt/go/1.21.1/bin first; condensed] 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=[same, /opt/protoc/21.7/bin first; condensed] 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo [the exported PATH; condensed] 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.415 14:11:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.415 14:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
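
multipath.sh has now pinned down its constants (subsystem nqn.2016-06.io.spdk:cnode1 and the rpc.py path) and re-runs the same nvmftestinit NIC discovery that queue_depth did, which is the duplicated trace below. The capture ends before the multipath body itself executes; purely as an illustrative sketch (not from this log), the move that creates a second path is publishing the same subsystem on more than one listener, e.g. on the second port 4421 defined in the common config:

    # illustrative only - two listeners on one subsystem give the initiator two paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
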
00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.728 14:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.728 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:16.729 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:16.729 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:16.729 14:11:55 
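The loop traced above walks pci_devs (seeded from the e810 device-ID list) and prints each matching NIC. A hedged sketch of the same discovery done directly against standard Linux sysfs paths; the suite's pci_bus_cache plumbing is elided.

# Sketch: find Intel E810 functions (vendor 0x8086, device 0x159b) via sysfs.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")   # e.g. 0x8086
    device=$(cat "$pci/device")   # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
    fi
done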
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:16.729 Found net devices under 0000:31:00.0: cvl_0_0 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:16.729 Found net devices under 0000:31:00.1: cvl_0_1 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.729 14:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:16.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:28:16.729 00:28:16.729 --- 10.0.0.2 ping statistics --- 00:28:16.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.729 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:16.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:28:16.729 00:28:16.729 --- 10.0.0.1 ping statistics --- 00:28:16.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.729 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:16.729 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:16.988 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:16.988 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:16.988 only one NIC for nvmf test 00:28:16.988 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:16.988 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.988 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:16.988 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.989 rmmod nvme_tcp 00:28:16.989 rmmod nvme_fabrics 00:28:16.989 rmmod nvme_keyring 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:16.989 14:11:56 
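The nvmf_tcp_init steps traced above reduce to the following topology: the first E810 port (cvl_0_0) is moved into a private namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator side, and a single iptables rule opens the NVMe/TCP port. Device names, addresses, and flags are copied from the trace.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator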
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.989 14:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:18.891 14:11:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.891 00:28:18.891 real 0m7.729s 00:28:18.891 user 0m1.465s 00:28:18.891 sys 0m4.125s 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:18.891 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:18.891 ************************************ 00:28:18.891 END TEST nvmf_target_multipath 00:28:18.891 ************************************ 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 ************************************ 00:28:19.151 START TEST nvmf_zcopy 00:28:19.151 ************************************ 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:19.151 * Looking for test storage... 
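The START/END banners and the real/user/sys summary above come from the suite's run_test harness in autotest_common.sh. A sketch consistent with that output follows; the actual helper body is not shown in the trace, so this is an approximation.

# Sketch of a run_test-style wrapper matching the banners and timing above.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # e.g. run_test nvmf_zcopy .../zcopy.sh --transport=tcp --interrupt-mode
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}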
00:28:19.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.151 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.152 --rc genhtml_branch_coverage=1 00:28:19.152 --rc genhtml_function_coverage=1 00:28:19.152 --rc genhtml_legend=1 00:28:19.152 --rc geninfo_all_blocks=1 00:28:19.152 --rc geninfo_unexecuted_blocks=1 00:28:19.152 00:28:19.152 ' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.152 --rc genhtml_branch_coverage=1 00:28:19.152 --rc genhtml_function_coverage=1 00:28:19.152 --rc genhtml_legend=1 00:28:19.152 --rc geninfo_all_blocks=1 00:28:19.152 --rc geninfo_unexecuted_blocks=1 00:28:19.152 00:28:19.152 ' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.152 --rc genhtml_branch_coverage=1 00:28:19.152 --rc genhtml_function_coverage=1 00:28:19.152 --rc genhtml_legend=1 00:28:19.152 --rc geninfo_all_blocks=1 00:28:19.152 --rc geninfo_unexecuted_blocks=1 00:28:19.152 00:28:19.152 ' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.152 --rc genhtml_branch_coverage=1 00:28:19.152 --rc genhtml_function_coverage=1 00:28:19.152 --rc genhtml_legend=1 00:28:19.152 --rc geninfo_all_blocks=1 00:28:19.152 --rc geninfo_unexecuted_blocks=1 00:28:19.152 00:28:19.152 ' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.152 14:11:58 
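As the echoed value above shows, the repeated prepends from paths/export.sh leave PATH with many duplicate entries. For reference only (this is not part of the suite), a hedged, order-preserving way to collapse such duplicates:

# Sketch: deduplicate PATH entries while keeping first-seen order.
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}   # drop the trailing ':' left by ORS
export PATH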
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.152 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.153 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.153 14:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:24.429 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.429 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.430 14:12:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:24.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:24.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:24.430 Found net devices under 0000:31:00.0: cvl_0_0 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:24.430 Found net devices under 0000:31:00.1: cvl_0_1 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.430 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.691 14:12:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:28:24.691 00:28:24.691 --- 10.0.0.2 ping statistics --- 00:28:24.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.691 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:28:24.691 00:28:24.691 --- 10.0.0.1 ping statistics --- 00:28:24.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.691 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1117365 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1117365 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # '[' -z 1117365 ']' 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:24.691 14:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:24.691 [2024-11-06 14:12:03.937142] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:24.691 [2024-11-06 14:12:03.938137] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:28:24.691 [2024-11-06 14:12:03.938173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.951 [2024-11-06 14:12:04.022652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.951 [2024-11-06 14:12:04.056562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.951 [2024-11-06 14:12:04.056596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.951 [2024-11-06 14:12:04.056603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.951 [2024-11-06 14:12:04.056610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.951 [2024-11-06 14:12:04.056616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.951 [2024-11-06 14:12:04.057189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.951 [2024-11-06 14:12:04.112235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:24.951 [2024-11-06 14:12:04.112506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
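The nvmfappstart step traced above launches nvmf_tgt inside the target namespace with the zcopy test's core mask, records its pid, and waits for the RPC socket at /var/tmp/spdk.sock. A sketch of the equivalent, with paths shortened to the spdk checkout; the polling loop is an assumption standing in for the suite's waitforlisten helper.

# Sketch: start the interrupt-mode target and poll until RPC is up.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # abort if the target died during startup
    sleep 0.5
done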
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 14:12:04.741938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 14:12:04.758102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.521 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:25.522 malloc0
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:25.522 {
00:28:25.522   "params": {
00:28:25.522     "name": "Nvme$subsystem",
00:28:25.522     "trtype": "$TEST_TRANSPORT",
00:28:25.522     "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:25.522     "adrfam": "ipv4",
00:28:25.522     "trsvcid": "$NVMF_PORT",
00:28:25.522     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:25.522     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:25.522     "hdgst": ${hdgst:-false},
00:28:25.522     "ddgst": ${ddgst:-false}
00:28:25.522   },
00:28:25.522   "method": "bdev_nvme_attach_controller"
00:28:25.522 }
00:28:25.522 EOF
00:28:25.522 )")
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:28:25.522 14:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:25.522   "params": {
00:28:25.522     "name": "Nvme1",
00:28:25.522     "trtype": "tcp",
00:28:25.522     "traddr": "10.0.0.2",
00:28:25.522     "adrfam": "ipv4",
00:28:25.522     "trsvcid": "4420",
00:28:25.522     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:25.522     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:25.522     "hdgst": false,
00:28:25.522     "ddgst": false
00:28:25.522   },
00:28:25.522   "method": "bdev_nvme_attach_controller"
00:28:25.522 }'
00:28:25.782 [2024-11-06 14:12:04.823670] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
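The rpc_cmd calls traced above are the harness's wrapper around scripts/rpc.py, so the same target setup can be reproduced directly with rpc.py against the default /var/tmp/spdk.sock socket. A sketch, with every flag copied from the trace (-o and -c 0 are simply the TCP options the harness placed in NVMF_TRANSPORT_OPTS):

# Zero-copy TCP transport (target/zcopy.sh@22).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host (-a), serial number, up to 10 namespaces (@24).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on 10.0.0.2:4420 (@25/@27).
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4 KiB blocks, exported as NSID 1 (@29/@30).
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1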
00:28:25.782 [2024-11-06 14:12:04.823721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117615 ]
00:28:25.782 [2024-11-06 14:12:04.888753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:25.782 [2024-11-06 14:12:04.919238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:26.042 Running I/O for 10 seconds...
00:28:27.919 6744.00 IOPS, 52.69 MiB/s
[2024-11-06T13:12:08.142Z] 6754.00 IOPS, 52.77 MiB/s
[2024-11-06T13:12:09.521Z] 6774.67 IOPS, 52.93 MiB/s
[2024-11-06T13:12:10.458Z] 7546.25 IOPS, 58.96 MiB/s
[2024-11-06T13:12:11.394Z] 8005.40 IOPS, 62.54 MiB/s
[2024-11-06T13:12:12.330Z] 8312.00 IOPS, 64.94 MiB/s
[2024-11-06T13:12:13.268Z] 8538.57 IOPS, 66.71 MiB/s
[2024-11-06T13:12:14.205Z] 8710.38 IOPS, 68.05 MiB/s
[2024-11-06T13:12:15.143Z] 8842.33 IOPS, 69.08 MiB/s
[2024-11-06T13:12:15.143Z] 8947.50 IOPS, 69.90 MiB/s
00:28:35.859                                                                           Latency(us)
[2024-11-06T13:12:15.143Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min      max
00:28:35.859 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:28:35.859   Verification LBA range: start 0x0 length 0x1000
00:28:35.859   Nvme1n1            :      10.01 8951.03   69.93    0.00   0.00  14262.50  2307.41 24357.55
00:28:35.859 [2024-11-06T13:12:15.143Z] ===================================================================================================================
[2024-11-06T13:12:15.143Z] Total              :            8951.03   69.93    0.00   0.00  14262.50  2307.41 24357.55
00:28:36.118 14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1120160
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:36.118 {
00:28:36.118   "params": {
00:28:36.118     "name": "Nvme$subsystem",
00:28:36.118     "trtype": "$TEST_TRANSPORT",
00:28:36.118     "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:36.118     "adrfam": "ipv4",
00:28:36.118     "trsvcid": "$NVMF_PORT",
00:28:36.118     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:36.118     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:36.118     "hdgst": ${hdgst:-false},
00:28:36.118     "ddgst": ${ddgst:-false}
00:28:36.118   },
00:28:36.118   "method": "bdev_nvme_attach_controller"
00:28:36.118 }
00:28:36.118 EOF
00:28:36.118 )")
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:28:36.118 [2024-11-06 14:12:15.237517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 14:12:15.237547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
14:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:36.118   "params": {
00:28:36.118     "name": "Nvme1",
00:28:36.118     "trtype": "tcp",
00:28:36.118     "traddr": "10.0.0.2",
00:28:36.118     "adrfam": "ipv4",
00:28:36.118     "trsvcid": "4420",
00:28:36.118     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:36.118     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:36.118     "hdgst": false,
00:28:36.118     "ddgst": false
00:28:36.118   },
00:28:36.118   "method": "bdev_nvme_attach_controller"
00:28:36.118 }'
[2024-11-06 14:12:15.245483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 14:12:15.245493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-06 14:12:15.253481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 14:12:15.253490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-06 14:12:15.261481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 14:12:15.261490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-06 14:12:15.263275] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization...
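The --json /dev/fd/63 argument in the trace is the visible end of a process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza shown above, and bdevperf reads it as its bdev configuration. A sketch of the pattern; gen_config here is a hypothetical stand-in, and the subsystems/config envelope around the printed stanza is assumed from the harness rather than visible in this excerpt:

gen_config() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# 5 s of 50/50 random read/write, queue depth 128, 8 KiB I/O, exactly as
# traced; the <(...) substitution is why bdevperf sees --json /dev/fd/63.
"$BDEVPERF" --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192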
00:28:36.118 [2024-11-06 14:12:15.263324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120160 ]
[2024-11-06 14:12:15.269481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-06 14:12:15.269491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats verbatim, timestamps 14:12:15.277480 through 14:12:15.325488, trimmed ...]
[2024-11-06 14:12:15.328144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats, timestamps 14:12:15.333481 through 14:12:15.349490, trimmed ...]
[2024-11-06 14:12:15.357399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats, timestamps 14:12:15.357481 through 14:12:15.525503, trimmed ...]
00:28:36.377 Running I/O for 5 seconds...
[... with I/O in flight the same pair keeps firing every ~12 ms, timestamps 14:12:15.538302 through 14:12:16.533779, trimmed ...]
00:28:37.417 19347.00 IOPS, 151.15 MiB/s
[... error pair resumes at 14:12:16.546541 and is still repeating when this chunk of the log cuts off mid-entry at 14:12:17.066239 ...]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.076596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.076612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.082436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.082450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.091707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.091722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.101335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.101350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.107029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.107044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.116178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.116193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.122132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.122146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.132586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.132602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.138154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.138169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.148376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.148391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.156816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.156831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.162531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.162546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.172592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.172607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.178241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.178259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.188413] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.188428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.194077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.194091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.204230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.204249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:37.937 [2024-11-06 14:12:17.212341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:37.937 [2024-11-06 14:12:17.212356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.196 [2024-11-06 14:12:17.220313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.196 [2024-11-06 14:12:17.220328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.196 [2024-11-06 14:12:17.226538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.226552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.236275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.236291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.242095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.242109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.252481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.252496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.258440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.258455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.268585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.268600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.274384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.274399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.284151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.284166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.292948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.292963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.298885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.298900] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.309056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.309071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.314722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.314737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.324135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.324150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.331551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.331566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.342448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.342462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.354594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.354609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.365923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.365938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.378435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.378450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.390479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.390493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.402470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.402485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.414324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.414339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.426761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.426775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.437709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.437727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.443595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.443610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.452115] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.452129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.461448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.461463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.467408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.467422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.197 [2024-11-06 14:12:17.476105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.197 [2024-11-06 14:12:17.476119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.485289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.485304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.490865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.490879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.500348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.500363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.508923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.508938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.514755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.514770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.524129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.524144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.532903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.532918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 19400.00 IOPS, 151.56 MiB/s [2024-11-06T13:12:17.740Z] [2024-11-06 14:12:17.538536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.538551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.548677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.548692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.554390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.554405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.564118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:28:38.456 [2024-11-06 14:12:17.564133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.572748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.572762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.578366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.578380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.588733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.588751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.594546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.594561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.604446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.604461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.610222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.610238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.620653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.620668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.626543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.626558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.636662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.636677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.642316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.642329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.456 [2024-11-06 14:12:17.651871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.456 [2024-11-06 14:12:17.651885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.661022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.661037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.666596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.666610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.676030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.676044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.684809] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.684824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.690632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.690646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.700112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.700126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.709469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.709483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.715231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.715251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.724098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.724113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.732713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.732727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.457 [2024-11-06 14:12:17.738400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.457 [2024-11-06 14:12:17.738419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.716 [2024-11-06 14:12:17.748718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.716 [2024-11-06 14:12:17.748733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.754353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.754367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.764310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.764325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.772837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.772851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.778606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.778620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.788420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.788434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.794088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.794102] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.804373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.804388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.810036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.810051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.820170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.820185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.827541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.827556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.837106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.837121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.843048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.843063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.852021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.852036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.860763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.860777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.866613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.866628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.876426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.876441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.882057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.882071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.892507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.892521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.898216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.898230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.907922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.907936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.917196] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.917211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.922931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.922945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.932335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.932350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.938080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.938095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.948102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.948117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.956748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.956762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.962509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.962524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.972130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.972144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.980337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.980352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.986073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.986087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.717 [2024-11-06 14:12:17.996124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.717 [2024-11-06 14:12:17.996140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.004912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.004927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.010417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.010431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.019830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.019844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.029185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.029199] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.034868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.034883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.044276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.044290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.052892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.052906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.058785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.058799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.068483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.068498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.074109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.074123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.084570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.084585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.090188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.090202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.100144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.100159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.108854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.108868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.114400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.114414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.124515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.124529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.130233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.130252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.140686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.140701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.146521] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.146535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.155982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.155997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.165405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.165419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.171204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.171219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.180865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.180879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.186755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.186769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.196187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.196202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.205421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.205437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.211143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.211158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.220583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.220597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.226437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.226452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.236786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.236801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.242439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.242454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.252910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.252925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:38.977 [2024-11-06 14:12:18.258517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:38.977 [2024-11-06 14:12:18.258532] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.268702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.268717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.274448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.274462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.283992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.284007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.292704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.292719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.298485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.298500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.308405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.308420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.314107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.314121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.324680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.324695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.330287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.330301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.340715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.340737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.346339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.346353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.356604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.356619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.362297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.362312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.372471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.372486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.378054] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.378068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.388462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.388477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.394314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.394329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.404285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.404300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.412758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.412774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.418422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.418436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.428262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.428277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.434254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.434269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.444656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.444671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.450357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.450372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.459991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.460006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.468714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.468729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.474618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.474633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.484347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.484362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.492281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.492299] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.500463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.500478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.506064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.506078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.237 [2024-11-06 14:12:18.516044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.237 [2024-11-06 14:12:18.516060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.525480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.525495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.531312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.531326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.539834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.539850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 19431.33 IOPS, 151.81 MiB/s [2024-11-06T13:12:18.781Z] [2024-11-06 14:12:18.549385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.549400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.555177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.555192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.564439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.564455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.570072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.570086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.580380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.580395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.588935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.588951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.594834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.594849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 14:12:18.604516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:39.497 [2024-11-06 14:12:18.604531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:39.497 [2024-11-06 
14:12:18.610250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:39.497 [2024-11-06 14:12:18.610264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for each subsequent nvmf_subsystem_add_ns attempt against the paused subsystem, 14:12:18.620 through 14:12:19.540; individually timestamped repeats elided ...]
00:28:40.279 19434.50 IOPS, 151.83 MiB/s [2024-11-06T13:12:19.563Z]
[... error pair repeats continue, 14:12:19.546 through 14:12:20.524; elided ...]
00:28:41.321 [2024-11-06 14:12:20.533362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:41.321 [2024-11-06 14:12:20.533377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:41.321 19436.80 IOPS, 151.85 MiB/s [2024-11-06T13:12:20.605Z]
00:28:41.321 [2024-11-06 14:12:20.545595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:41.321 [2024-11-06 14:12:20.545609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:41.321
00:28:41.321 Latency(us)
00:28:41.321 [2024-11-06T13:12:20.605Z] Device Information          : runtime(s)      IOPS     MiB/s   Fail/s   TO/s   Average       min       max
00:28:41.321 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:28:41.321 Nvme1n1                     :       5.01  19439.16    151.87     0.00   0.00   6578.77   2088.96  10977.28
00:28:41.321 [2024-11-06T13:12:20.605Z] ===================================================================================================================
00:28:41.321 [2024-11-06T13:12:20.605Z] Total                       :             19439.16    151.87     0.00   0.00   6578.77   2088.96  10977.28
[... the same error pair repeats 13 more times, 14:12:20.553 through 14:12:20.649; repeats elided ...]
00:28:41.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1120160) - No such process
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1120160
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:41.582 delay0
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:41.582 14:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-06 14:12:20.724660] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:28:48.151 Initializing NVMe Controllers
00:28:48.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:48.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:48.151 Initialization complete. Launching workers.
00:28:48.151 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1422
00:28:48.151 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1708, failed to submit 34
00:28:48.151 success 1556, unsuccessful 152, failed 0
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:48.151 rmmod nvme_tcp
00:28:48.151 rmmod nvme_fabrics
00:28:48.151 rmmod nvme_keyring
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1117365 ']'
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1117365
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1117365 ']'
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1117365
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1117365
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1117365'
00:28:48.151 killing process with pid 1117365
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1117365
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1117365
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:48.151 14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:50.685
00:28:50.685 real 0m31.207s
00:28:50.685 user 0m41.900s
00:28:50.685 sys 0m9.673s
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:50.685 ************************************
00:28:50.685 END TEST nvmf_zcopy
00:28:50.685 ************************************
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:50.685 ************************************
00:28:50.685 START TEST nvmf_nmic
00:28:50.685 ************************************
00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:28:50.685 * Looking for test storage...
00:28:50.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:50.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.685 --rc genhtml_branch_coverage=1 00:28:50.685 --rc genhtml_function_coverage=1 00:28:50.685 --rc genhtml_legend=1 00:28:50.685 --rc geninfo_all_blocks=1 00:28:50.685 --rc geninfo_unexecuted_blocks=1 00:28:50.685 00:28:50.685 ' 00:28:50.685 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:50.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.685 --rc genhtml_branch_coverage=1 00:28:50.685 --rc genhtml_function_coverage=1 00:28:50.685 --rc genhtml_legend=1 00:28:50.685 --rc geninfo_all_blocks=1 00:28:50.685 --rc geninfo_unexecuted_blocks=1 00:28:50.685 00:28:50.685 ' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.686 --rc genhtml_branch_coverage=1 00:28:50.686 --rc genhtml_function_coverage=1 00:28:50.686 --rc genhtml_legend=1 00:28:50.686 --rc geninfo_all_blocks=1 00:28:50.686 --rc geninfo_unexecuted_blocks=1 00:28:50.686 00:28:50.686 ' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.686 --rc genhtml_branch_coverage=1 00:28:50.686 --rc genhtml_function_coverage=1 00:28:50.686 --rc genhtml_legend=1 00:28:50.686 --rc geninfo_all_blocks=1 00:28:50.686 --rc geninfo_unexecuted_blocks=1 00:28:50.686 00:28:50.686 ' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.686 14:12:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.686 14:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.968 14:12:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:55.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.968 14:12:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:55.968 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:55.968 Found net devices under 0000:31:00.0: cvl_0_0 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.968 
14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:55.968 Found net devices under 0000:31:00.1: cvl_0_1 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.968 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
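nvmf_tcp_init builds a two-endpoint TCP topology on a single host: the first E810 port (cvl_0_0) is moved into the private namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1; the link-up, iptables, and ping verification steps continue below. Condensed, the setup traced so far is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (inside namespace)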
00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.969 14:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:28:55.969 00:28:55.969 --- 10.0.0.2 ping statistics --- 00:28:55.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.969 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:28:55.969 00:28:55.969 --- 10.0.0.1 ping statistics --- 00:28:55.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.969 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1127147 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1127147 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1127147 ']' 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:55.969 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:55.969 [2024-11-06 14:12:35.147602] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:55.969 [2024-11-06 14:12:35.148748] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:28:55.969 [2024-11-06 14:12:35.148802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.969 [2024-11-06 14:12:35.245365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.229 [2024-11-06 14:12:35.299957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.229 [2024-11-06 14:12:35.300012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.229 [2024-11-06 14:12:35.300020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.229 [2024-11-06 14:12:35.300027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.229 [2024-11-06 14:12:35.300033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.229 [2024-11-06 14:12:35.302118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.229 [2024-11-06 14:12:35.302288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.229 [2024-11-06 14:12:35.302320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.229 [2024-11-06 14:12:35.302328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.229 [2024-11-06 14:12:35.380741] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:56.229 [2024-11-06 14:12:35.381280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:56.229 [2024-11-06 14:12:35.381286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
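The nmic target is started inside that namespace with --interrupt-mode, so each reactor sleeps on file-descriptor events instead of busy-polling; -m 0xF accounts for the four reactors seen on cores 0-3, -e 0xFFFF enables all tracepoint groups (hence the Tracepoint Group Mask notice above), and -i 0 sets the shared-memory ID behind the --file-prefix=spdk0 EAL parameter. The remaining per-thread interrupt-mode notices follow. The launch boils down to (a sketch, paths relative to the SPDK checkout):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF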
00:28:56.229 [2024-11-06 14:12:35.381877] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:56.229 [2024-11-06 14:12:35.381910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 [2024-11-06 14:12:35.987414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 Malloc0 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
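The bring-up for test case1 is a five-call RPC sequence: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem cnode1 allowing any host, attach Malloc0 as its namespace, and listen on 10.0.0.2:4420 (the Listening notice appears just below). Case1 then tries to attach the same Malloc0 to a second subsystem, which must fail because the bdev is already claimed exclusive_write by cnode1. Standalone, the same sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # -u: IO unit size in bytes
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420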
00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 [2024-11-06 14:12:36.063334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:28:56.897 test case1: single bdev can't be used in multiple subsystems 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 [2024-11-06 14:12:36.086996] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:28:56.897 [2024-11-06 14:12:36.087021] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:28:56.897 [2024-11-06 14:12:36.087030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.897 request: 00:28:56.897 { 00:28:56.897 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:56.897 "namespace": { 00:28:56.897 "bdev_name": "Malloc0", 00:28:56.897 "no_auto_visible": false 00:28:56.897 }, 00:28:56.897 "method": "nvmf_subsystem_add_ns", 00:28:56.897 "req_id": 1 00:28:56.897 } 00:28:56.897 Got JSON-RPC error response 00:28:56.897 response: 00:28:56.897 { 00:28:56.897 "code": -32602, 00:28:56.897 "message": "Invalid parameters" 00:28:56.897 } 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:28:56.897 14:12:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:28:56.897 Adding namespace failed - expected result. 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:28:56.897 test case2: host connect to nvmf target in multiple paths 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.897 [2024-11-06 14:12:36.095108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.897 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:57.468 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:28:57.728 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:28:57.728 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:28:57.728 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:28:57.728 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:28:57.728 14:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:28:59.634 14:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:59.634 [global] 00:28:59.634 thread=1 00:28:59.634 invalidate=1 
00:28:59.634 rw=write 00:28:59.634 time_based=1 00:28:59.634 runtime=1 00:28:59.634 ioengine=libaio 00:28:59.634 direct=1 00:28:59.634 bs=4096 00:28:59.634 iodepth=1 00:28:59.634 norandommap=0 00:28:59.634 numjobs=1 00:28:59.634 00:28:59.634 verify_dump=1 00:28:59.634 verify_backlog=512 00:28:59.634 verify_state_save=0 00:28:59.634 do_verify=1 00:28:59.634 verify=crc32c-intel 00:28:59.634 [job0] 00:28:59.634 filename=/dev/nvme0n1 00:28:59.634 Could not set queue depth (nvme0n1) 00:29:00.207 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:00.207 fio-3.35 00:29:00.207 Starting 1 thread 00:29:01.147 00:29:01.147 job0: (groupid=0, jobs=1): err= 0: pid=1128239: Wed Nov 6 14:12:40 2024 00:29:01.147 read: IOPS=830, BW=3321KiB/s (3400kB/s)(3324KiB/1001msec) 00:29:01.147 slat (nsec): min=6546, max=56773, avg=22204.75, stdev=8353.12 00:29:01.147 clat (usec): min=365, max=1007, avg=688.58, stdev=73.52 00:29:01.147 lat (usec): min=391, max=1033, avg=710.78, stdev=76.13 00:29:01.147 clat percentiles (usec): 00:29:01.147 | 1.00th=[ 519], 5.00th=[ 578], 10.00th=[ 603], 20.00th=[ 627], 00:29:01.147 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 709], 00:29:01.147 | 70.00th=[ 717], 80.00th=[ 734], 90.00th=[ 758], 95.00th=[ 783], 00:29:01.147 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 1012], 99.95th=[ 1012], 00:29:01.147 | 99.99th=[ 1012] 00:29:01.147 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:29:01.147 slat (nsec): min=4621, max=54066, avg=25630.42, stdev=10742.47 00:29:01.147 clat (usec): min=125, max=857, avg=362.54, stdev=62.80 00:29:01.147 lat (usec): min=138, max=889, avg=388.17, stdev=65.82 00:29:01.147 clat percentiles (usec): 00:29:01.147 | 1.00th=[ 217], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 297], 00:29:01.147 | 30.00th=[ 318], 40.00th=[ 363], 50.00th=[ 383], 60.00th=[ 388], 00:29:01.147 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 429], 95.00th=[ 449], 00:29:01.147 | 99.00th=[ 502], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[ 857], 00:29:01.147 | 99.99th=[ 857] 00:29:01.147 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:29:01.147 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:01.147 lat (usec) : 250=1.56%, 500=53.32%, 750=39.78%, 1000=5.28% 00:29:01.147 lat (msec) : 2=0.05% 00:29:01.147 cpu : usr=2.90%, sys=4.30%, ctx=1855, majf=0, minf=1 00:29:01.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:01.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:01.147 issued rwts: total=831,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:01.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:01.147 00:29:01.147 Run status group 0 (all jobs): 00:29:01.147 READ: bw=3321KiB/s (3400kB/s), 3321KiB/s-3321KiB/s (3400kB/s-3400kB/s), io=3324KiB (3404kB), run=1001-1001msec 00:29:01.147 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:29:01.147 00:29:01.147 Disk stats (read/write): 00:29:01.147 nvme0n1: ios=742/1024, merge=0/0, ticks=518/368, in_queue=886, util=93.59% 00:29:01.147 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:01.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:01.407 14:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.407 rmmod nvme_tcp 00:29:01.407 rmmod nvme_fabrics 00:29:01.407 rmmod nvme_keyring 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1127147 ']' 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1127147 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1127147 ']' 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1127147 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1127147 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 1127147' 00:29:01.407 killing process with pid 1127147 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1127147 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1127147 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:01.407 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:01.667 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:01.667 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:01.667 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.668 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.668 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.668 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.668 14:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:03.575 00:29:03.575 real 0m13.277s 00:29:03.575 user 0m33.318s 00:29:03.575 sys 0m5.661s 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:03.575 ************************************ 00:29:03.575 END TEST nvmf_nmic 00:29:03.575 ************************************ 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:03.575 ************************************ 00:29:03.575 START TEST nvmf_fio_target 00:29:03.575 ************************************ 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:03.575 * Looking for test storage... 
00:29:03.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:29:03.575 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:03.834 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.835 --rc genhtml_branch_coverage=1 00:29:03.835 --rc genhtml_function_coverage=1 00:29:03.835 --rc genhtml_legend=1 00:29:03.835 --rc geninfo_all_blocks=1 00:29:03.835 --rc geninfo_unexecuted_blocks=1 00:29:03.835 00:29:03.835 ' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.835 --rc genhtml_branch_coverage=1 00:29:03.835 --rc genhtml_function_coverage=1 00:29:03.835 --rc genhtml_legend=1 00:29:03.835 --rc geninfo_all_blocks=1 00:29:03.835 --rc geninfo_unexecuted_blocks=1 00:29:03.835 00:29:03.835 ' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.835 --rc genhtml_branch_coverage=1 00:29:03.835 --rc genhtml_function_coverage=1 00:29:03.835 --rc genhtml_legend=1 00:29:03.835 --rc geninfo_all_blocks=1 00:29:03.835 --rc geninfo_unexecuted_blocks=1 00:29:03.835 00:29:03.835 ' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.835 --rc genhtml_branch_coverage=1 00:29:03.835 --rc genhtml_function_coverage=1 00:29:03.835 --rc genhtml_legend=1 00:29:03.835 --rc geninfo_all_blocks=1 00:29:03.835 --rc geninfo_unexecuted_blocks=1 00:29:03.835 
00:29:03.835 ' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.835 14:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.117 14:12:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.117 14:12:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:09.117 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:09.117 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:09.117 Found net 
devices under 0000:31:00.0: cvl_0_0 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:09.117 Found net devices under 0000:31:00.1: cvl_0_1 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.117 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.118 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.118 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.118 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.118 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.118 14:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:29:09.118 00:29:09.118 --- 10.0.0.2 ping statistics --- 00:29:09.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.118 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:29:09.118 00:29:09.118 --- 10.0.0.1 ping statistics --- 00:29:09.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.118 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1132696 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1132696 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1132696 ']' 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
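The trace above has just built the point-to-point topology this run drives: one port of the E810 NIC (cvl_0_0) is moved into a dedicated network namespace and addressed as 10.0.0.2 (the target side), its peer port (cvl_0_1) stays in the root namespace as 10.0.0.1 (the initiator side), an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction confirms the link. A minimal stand-alone sketch of the same setup, using the interface and namespace names from this log (on other hardware the cvl_* names would differ, and the harness additionally flushes stale addresses first):

    # create a namespace for the target and move one NIC port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address the initiator port in the root namespace and the
    # target port inside the namespace, on the same /24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both ends (plus loopback in the namespace) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in to the discovery/IO port
    # (the harness also tags the rule with an SPDK_NVMF comment)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
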
00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.118 14:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:09.118 [2024-11-06 14:12:48.301601] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:09.118 [2024-11-06 14:12:48.302730] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:29:09.118 [2024-11-06 14:12:48.302781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.118 [2024-11-06 14:12:48.394845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.377 [2024-11-06 14:12:48.448590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.377 [2024-11-06 14:12:48.448643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.377 [2024-11-06 14:12:48.448652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.377 [2024-11-06 14:12:48.448660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.377 [2024-11-06 14:12:48.448666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.377 [2024-11-06 14:12:48.450676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.377 [2024-11-06 14:12:48.450842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.377 [2024-11-06 14:12:48.451002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.377 [2024-11-06 14:12:48.451003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.377 [2024-11-06 14:12:48.528717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:09.377 [2024-11-06 14:12:48.529257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:09.377 [2024-11-06 14:12:48.529964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:09.377 [2024-11-06 14:12:48.530135] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:09.377 [2024-11-06 14:12:48.530137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.946 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:10.205 [2024-11-06 14:12:49.247803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.205 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.205 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:10.205 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.583 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:10.583 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.583 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:10.583 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:10.846 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:10.846 14:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:11.106 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:11.106 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:11.106 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:11.367 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:11.367 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:11.626 14:12:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:11.626 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:11.626 14:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:11.886 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:11.886 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.145 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:12.145 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:12.145 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.403 [2024-11-06 14:12:51.471722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.404 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:12.404 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:12.663 14:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:12.922 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:12.922 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:29:12.922 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:12.922 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:29:12.922 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:29:12.922 14:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:29:15.457 14:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:15.457 [global] 00:29:15.457 thread=1 00:29:15.457 invalidate=1 00:29:15.457 rw=write 00:29:15.457 time_based=1 00:29:15.457 runtime=1 00:29:15.457 ioengine=libaio 00:29:15.457 direct=1 00:29:15.457 bs=4096 00:29:15.457 iodepth=1 00:29:15.457 norandommap=0 00:29:15.457 numjobs=1 00:29:15.457 00:29:15.457 verify_dump=1 00:29:15.457 verify_backlog=512 00:29:15.457 verify_state_save=0 00:29:15.457 do_verify=1 00:29:15.457 verify=crc32c-intel 00:29:15.457 [job0] 00:29:15.457 filename=/dev/nvme0n1 00:29:15.457 [job1] 00:29:15.457 filename=/dev/nvme0n2 00:29:15.457 [job2] 00:29:15.457 filename=/dev/nvme0n3 00:29:15.457 [job3] 00:29:15.457 filename=/dev/nvme0n4 00:29:15.457 Could not set queue depth (nvme0n1) 00:29:15.458 Could not set queue depth (nvme0n2) 00:29:15.458 Could not set queue depth (nvme0n3) 00:29:15.458 Could not set queue depth (nvme0n4) 00:29:15.458 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:15.458 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:15.458 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:15.458 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:15.458 fio-3.35 00:29:15.458 Starting 4 threads 00:29:16.835 00:29:16.835 job0: (groupid=0, jobs=1): err= 0: pid=1134280: Wed Nov 6 14:12:55 2024 00:29:16.835 read: IOPS=17, BW=71.1KiB/s (72.9kB/s)(72.0KiB/1012msec) 00:29:16.835 slat (nsec): min=3749, max=26971, avg=23876.11, stdev=6271.30 00:29:16.835 clat (usec): min=1155, max=42130, avg=39581.63, stdev=9595.68 00:29:16.835 lat (usec): min=1167, max=42156, avg=39605.51, stdev=9598.94 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41157], 20.00th=[41681], 00:29:16.835 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:29:16.835 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:16.835 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:16.835 | 99.99th=[42206] 00:29:16.835 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:29:16.835 slat (nsec): min=4072, max=52454, avg=12669.79, stdev=6818.88 00:29:16.835 clat (usec): min=100, max=1043, avg=568.39, stdev=173.08 00:29:16.835 lat (usec): min=105, max=1065, avg=581.06, stdev=175.00 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 131], 5.00th=[ 253], 10.00th=[ 359], 20.00th=[ 429], 00:29:16.835 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 619], 00:29:16.835 | 70.00th=[ 652], 80.00th=[ 709], 90.00th=[ 775], 95.00th=[ 840], 
00:29:16.835 | 99.00th=[ 996], 99.50th=[ 1029], 99.90th=[ 1045], 99.95th=[ 1045], 00:29:16.835 | 99.99th=[ 1045] 00:29:16.835 bw ( KiB/s): min= 4096, max= 4096, per=40.55%, avg=4096.00, stdev= 0.00, samples=1 00:29:16.835 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:16.835 lat (usec) : 250=4.53%, 500=28.30%, 750=50.57%, 1000=12.26% 00:29:16.835 lat (msec) : 2=1.13%, 50=3.21% 00:29:16.835 cpu : usr=0.30%, sys=0.59%, ctx=532, majf=0, minf=1 00:29:16.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:16.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.835 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:16.835 job1: (groupid=0, jobs=1): err= 0: pid=1134281: Wed Nov 6 14:12:55 2024 00:29:16.835 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:29:16.835 slat (nsec): min=10815, max=25821, avg=15292.29, stdev=3240.43 00:29:16.835 clat (usec): min=536, max=1305, avg=1002.00, stdev=95.23 00:29:16.835 lat (usec): min=550, max=1323, avg=1017.29, stdev=95.59 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 930], 00:29:16.835 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1029], 00:29:16.835 | 70.00th=[ 1045], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:29:16.835 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:29:16.835 | 99.99th=[ 1303] 00:29:16.835 write: IOPS=766, BW=3065KiB/s (3138kB/s)(3068KiB/1001msec); 0 zone resets 00:29:16.835 slat (nsec): min=3900, max=52157, avg=12960.25, stdev=4010.62 00:29:16.835 clat (usec): min=253, max=1202, avg=605.42, stdev=134.10 00:29:16.835 lat (usec): min=258, max=1216, avg=618.38, stdev=135.50 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 297], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 490], 00:29:16.835 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:29:16.835 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 824], 00:29:16.835 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1205], 99.95th=[ 1205], 00:29:16.835 | 99.99th=[ 1205] 00:29:16.835 bw ( KiB/s): min= 4096, max= 4096, per=40.55%, avg=4096.00, stdev= 0.00, samples=1 00:29:16.835 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:16.835 lat (usec) : 500=13.68%, 750=38.08%, 1000=27.21% 00:29:16.835 lat (msec) : 2=21.03% 00:29:16.835 cpu : usr=0.80%, sys=1.80%, ctx=1279, majf=0, minf=2 00:29:16.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:16.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.835 issued rwts: total=512,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:16.835 job2: (groupid=0, jobs=1): err= 0: pid=1134282: Wed Nov 6 14:12:55 2024 00:29:16.835 read: IOPS=19, BW=77.8KiB/s (79.7kB/s)(80.0KiB/1028msec) 00:29:16.835 slat (nsec): min=4481, max=28403, avg=24435.55, stdev=6584.86 00:29:16.835 clat (usec): min=824, max=42092, avg=37433.50, stdev=12494.82 00:29:16.835 lat (usec): min=828, max=42120, avg=37457.93, stdev=12499.63 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 824], 5.00th=[ 824], 10.00th=[ 1029], 
20.00th=[40633], 00:29:16.835 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:29:16.835 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:16.835 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:16.835 | 99.99th=[42206] 00:29:16.835 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:29:16.835 slat (usec): min=3, max=2068, avg=16.93, stdev=91.03 00:29:16.835 clat (usec): min=177, max=901, avg=524.05, stdev=139.56 00:29:16.835 lat (usec): min=181, max=2388, avg=540.98, stdev=162.96 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 265], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 383], 00:29:16.835 | 30.00th=[ 437], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 570], 00:29:16.835 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 742], 00:29:16.835 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 906], 99.95th=[ 906], 00:29:16.835 | 99.99th=[ 906] 00:29:16.835 bw ( KiB/s): min= 4096, max= 4096, per=40.55%, avg=4096.00, stdev= 0.00, samples=1 00:29:16.835 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:16.835 lat (usec) : 250=0.56%, 500=39.10%, 750=52.63%, 1000=4.14% 00:29:16.835 lat (msec) : 2=0.19%, 50=3.38% 00:29:16.835 cpu : usr=0.49%, sys=0.97%, ctx=534, majf=0, minf=1 00:29:16.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:16.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.835 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:16.835 job3: (groupid=0, jobs=1): err= 0: pid=1134283: Wed Nov 6 14:12:55 2024 00:29:16.835 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:29:16.835 slat (nsec): min=3715, max=29026, avg=15960.12, stdev=3802.69 00:29:16.835 clat (usec): min=554, max=1251, avg=986.87, stdev=105.23 00:29:16.835 lat (usec): min=568, max=1278, avg=1002.83, stdev=105.66 00:29:16.835 clat percentiles (usec): 00:29:16.835 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 848], 20.00th=[ 906], 00:29:16.836 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:29:16.836 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:29:16.836 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:29:16.836 | 99.99th=[ 1254] 00:29:16.836 write: IOPS=804, BW=3217KiB/s (3294kB/s)(3220KiB/1001msec); 0 zone resets 00:29:16.836 slat (nsec): min=4004, max=42127, avg=13644.96, stdev=4163.15 00:29:16.836 clat (usec): min=220, max=979, avg=584.48, stdev=133.77 00:29:16.836 lat (usec): min=225, max=993, avg=598.12, stdev=134.89 00:29:16.836 clat percentiles (usec): 00:29:16.836 | 1.00th=[ 281], 5.00th=[ 363], 10.00th=[ 412], 20.00th=[ 469], 00:29:16.836 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 619], 00:29:16.836 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 807], 00:29:16.836 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 979], 99.95th=[ 979], 00:29:16.836 | 99.99th=[ 979] 00:29:16.836 bw ( KiB/s): min= 4096, max= 4096, per=40.55%, avg=4096.00, stdev= 0.00, samples=1 00:29:16.836 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:16.836 lat (usec) : 250=0.15%, 500=16.10%, 750=38.72%, 1000=26.58% 00:29:16.836 lat (msec) : 2=18.45% 00:29:16.836 cpu : usr=0.40%, sys=2.30%, ctx=1318, majf=0, minf=1 00:29:16.836 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:16.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.836 issued rwts: total=512,805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:16.836 00:29:16.836 Run status group 0 (all jobs): 00:29:16.836 READ: bw=4132KiB/s (4231kB/s), 71.1KiB/s-2046KiB/s (72.9kB/s-2095kB/s), io=4248KiB (4350kB), run=1001-1028msec 00:29:16.836 WRITE: bw=9.86MiB/s (10.3MB/s), 1992KiB/s-3217KiB/s (2040kB/s-3294kB/s), io=10.1MiB (10.6MB), run=1001-1028msec 00:29:16.836 00:29:16.836 Disk stats (read/write): 00:29:16.836 nvme0n1: ios=38/512, merge=0/0, ticks=1459/280, in_queue=1739, util=96.69% 00:29:16.836 nvme0n2: ios=521/512, merge=0/0, ticks=527/317, in_queue=844, util=87.13% 00:29:16.836 nvme0n3: ios=75/512, merge=0/0, ticks=803/212, in_queue=1015, util=96.93% 00:29:16.836 nvme0n4: ios=569/512, merge=0/0, ticks=814/304, in_queue=1118, util=96.78% 00:29:16.836 14:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:16.836 [global] 00:29:16.836 thread=1 00:29:16.836 invalidate=1 00:29:16.836 rw=randwrite 00:29:16.836 time_based=1 00:29:16.836 runtime=1 00:29:16.836 ioengine=libaio 00:29:16.836 direct=1 00:29:16.836 bs=4096 00:29:16.836 iodepth=1 00:29:16.836 norandommap=0 00:29:16.836 numjobs=1 00:29:16.836 00:29:16.836 verify_dump=1 00:29:16.836 verify_backlog=512 00:29:16.836 verify_state_save=0 00:29:16.836 do_verify=1 00:29:16.836 verify=crc32c-intel 00:29:16.836 [job0] 00:29:16.836 filename=/dev/nvme0n1 00:29:16.836 [job1] 00:29:16.836 filename=/dev/nvme0n2 00:29:16.836 [job2] 00:29:16.836 filename=/dev/nvme0n3 00:29:16.836 [job3] 00:29:16.836 filename=/dev/nvme0n4 00:29:16.836 Could not set queue depth (nvme0n1) 00:29:16.836 Could not set queue depth (nvme0n2) 00:29:16.836 Could not set queue depth (nvme0n3) 00:29:16.836 Could not set queue depth (nvme0n4) 00:29:16.836 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:16.836 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:16.836 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:16.836 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:16.836 fio-3.35 00:29:16.836 Starting 4 threads 00:29:18.215 00:29:18.215 job0: (groupid=0, jobs=1): err= 0: pid=1134799: Wed Nov 6 14:12:57 2024 00:29:18.215 read: IOPS=30, BW=121KiB/s (124kB/s)(124KiB/1026msec) 00:29:18.215 slat (nsec): min=3204, max=25449, avg=20350.90, stdev=7154.44 00:29:18.215 clat (usec): min=518, max=41401, avg=29306.19, stdev=18551.80 00:29:18.215 lat (usec): min=543, max=41408, avg=29326.54, stdev=18552.98 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[ 519], 5.00th=[ 553], 10.00th=[ 734], 20.00th=[ 906], 00:29:18.215 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:18.215 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:18.215 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:18.215 | 99.99th=[41157] 00:29:18.215 write: IOPS=499, BW=1996KiB/s 
(2044kB/s)(2048KiB/1026msec); 0 zone resets 00:29:18.215 slat (nsec): min=3704, max=76692, avg=5114.89, stdev=4232.94 00:29:18.215 clat (usec): min=80, max=710, avg=221.68, stdev=104.83 00:29:18.215 lat (usec): min=85, max=742, avg=226.80, stdev=106.41 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 93], 20.00th=[ 102], 00:29:18.215 | 30.00th=[ 129], 40.00th=[ 223], 50.00th=[ 251], 60.00th=[ 262], 00:29:18.215 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 383], 00:29:18.215 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 709], 99.95th=[ 709], 00:29:18.215 | 99.99th=[ 709] 00:29:18.215 bw ( KiB/s): min= 4096, max= 4096, per=41.32%, avg=4096.00, stdev= 0.00, samples=1 00:29:18.215 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:18.215 lat (usec) : 100=17.31%, 250=28.91%, 500=45.67%, 750=3.13%, 1000=0.92% 00:29:18.215 lat (msec) : 50=4.05% 00:29:18.215 cpu : usr=0.20%, sys=0.20%, ctx=544, majf=0, minf=1 00:29:18.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.215 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:18.215 job1: (groupid=0, jobs=1): err= 0: pid=1134800: Wed Nov 6 14:12:57 2024 00:29:18.215 read: IOPS=18, BW=75.0KiB/s (76.7kB/s)(76.0KiB/1014msec) 00:29:18.215 slat (nsec): min=11132, max=26838, avg=24934.16, stdev=4101.95 00:29:18.215 clat (usec): min=40848, max=42047, avg=41858.38, stdev=318.98 00:29:18.215 lat (usec): min=40875, max=42067, avg=41883.31, stdev=318.17 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:29:18.215 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:29:18.215 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:18.215 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:18.215 | 99.99th=[42206] 00:29:18.215 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:29:18.215 slat (nsec): min=3781, max=22864, avg=12557.98, stdev=3897.77 00:29:18.215 clat (usec): min=93, max=785, avg=403.81, stdev=111.89 00:29:18.215 lat (usec): min=98, max=800, avg=416.37, stdev=112.86 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[ 182], 5.00th=[ 253], 10.00th=[ 277], 20.00th=[ 302], 00:29:18.215 | 30.00th=[ 326], 40.00th=[ 359], 50.00th=[ 404], 60.00th=[ 429], 00:29:18.215 | 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 562], 95.00th=[ 603], 00:29:18.215 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 783], 99.95th=[ 783], 00:29:18.215 | 99.99th=[ 783] 00:29:18.215 bw ( KiB/s): min= 4096, max= 4096, per=41.32%, avg=4096.00, stdev= 0.00, samples=1 00:29:18.215 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:18.215 lat (usec) : 100=0.19%, 250=4.52%, 500=73.26%, 750=18.08%, 1000=0.38% 00:29:18.215 lat (msec) : 50=3.58% 00:29:18.215 cpu : usr=0.30%, sys=0.49%, ctx=533, majf=0, minf=1 00:29:18.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.215 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.215 
latency : target=0, window=0, percentile=100.00%, depth=1 00:29:18.215 job2: (groupid=0, jobs=1): err= 0: pid=1134803: Wed Nov 6 14:12:57 2024 00:29:18.215 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(100KiB/1033msec) 00:29:18.215 slat (nsec): min=12082, max=28282, avg=25146.32, stdev=5626.06 00:29:18.215 clat (usec): min=798, max=42041, avg=32010.21, stdev=17839.43 00:29:18.215 lat (usec): min=815, max=42070, avg=32035.36, stdev=17842.35 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[ 799], 5.00th=[ 889], 10.00th=[ 889], 20.00th=[ 979], 00:29:18.215 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:29:18.215 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:18.215 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:18.215 | 99.99th=[42206] 00:29:18.215 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:29:18.215 slat (nsec): min=3508, max=30979, avg=13640.63, stdev=4147.28 00:29:18.215 clat (usec): min=101, max=813, avg=428.62, stdev=141.70 00:29:18.215 lat (usec): min=115, max=829, avg=442.26, stdev=142.62 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[ 119], 5.00th=[ 196], 10.00th=[ 241], 20.00th=[ 302], 00:29:18.215 | 30.00th=[ 347], 40.00th=[ 396], 50.00th=[ 429], 60.00th=[ 461], 00:29:18.215 | 70.00th=[ 502], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 652], 00:29:18.215 | 99.00th=[ 758], 99.50th=[ 816], 99.90th=[ 816], 99.95th=[ 816], 00:29:18.215 | 99.99th=[ 816] 00:29:18.215 bw ( KiB/s): min= 4096, max= 4096, per=41.32%, avg=4096.00, stdev= 0.00, samples=1 00:29:18.215 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:18.215 lat (usec) : 250=10.99%, 500=54.56%, 750=28.68%, 1000=2.05% 00:29:18.215 lat (msec) : 2=0.19%, 50=3.54% 00:29:18.215 cpu : usr=0.48%, sys=1.16%, ctx=538, majf=0, minf=1 00:29:18.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.215 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:18.215 job3: (groupid=0, jobs=1): err= 0: pid=1134804: Wed Nov 6 14:12:57 2024 00:29:18.215 read: IOPS=622, BW=2490KiB/s (2549kB/s)(2492KiB/1001msec) 00:29:18.215 slat (nsec): min=2724, max=45156, avg=14558.51, stdev=6430.01 00:29:18.215 clat (usec): min=346, max=1040, avg=785.07, stdev=122.80 00:29:18.215 lat (usec): min=349, max=1067, avg=799.63, stdev=125.78 00:29:18.215 clat percentiles (usec): 00:29:18.215 | 1.00th=[ 506], 5.00th=[ 594], 10.00th=[ 627], 20.00th=[ 676], 00:29:18.215 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 791], 60.00th=[ 848], 00:29:18.215 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 963], 00:29:18.215 | 99.00th=[ 1012], 99.50th=[ 1029], 99.90th=[ 1045], 99.95th=[ 1045], 00:29:18.215 | 99.99th=[ 1045] 00:29:18.215 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:29:18.216 slat (nsec): min=3519, max=60000, avg=19151.56, stdev=11516.28 00:29:18.216 clat (usec): min=158, max=4108, avg=460.59, stdev=154.55 00:29:18.216 lat (usec): min=166, max=4133, avg=479.74, stdev=159.31 00:29:18.216 clat percentiles (usec): 00:29:18.216 | 1.00th=[ 258], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 363], 00:29:18.216 | 30.00th=[ 404], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 482], 00:29:18.216 | 70.00th=[ 
506], 80.00th=[ 537], 90.00th=[ 611], 95.00th=[ 652], 00:29:18.216 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 734], 99.95th=[ 4113], 00:29:18.216 | 99.99th=[ 4113] 00:29:18.216 bw ( KiB/s): min= 4096, max= 4096, per=41.32%, avg=4096.00, stdev= 0.00, samples=1 00:29:18.216 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:18.216 lat (usec) : 250=0.49%, 500=42.14%, 750=35.70%, 1000=20.95% 00:29:18.216 lat (msec) : 2=0.67%, 10=0.06% 00:29:18.216 cpu : usr=1.70%, sys=4.40%, ctx=1649, majf=0, minf=1 00:29:18.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.216 issued rwts: total=623,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:18.216 00:29:18.216 Run status group 0 (all jobs): 00:29:18.216 READ: bw=2703KiB/s (2768kB/s), 75.0KiB/s-2490KiB/s (76.7kB/s-2549kB/s), io=2792KiB (2859kB), run=1001-1033msec 00:29:18.216 WRITE: bw=9913KiB/s (10.1MB/s), 1983KiB/s-4092KiB/s (2030kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1033msec 00:29:18.216 00:29:18.216 Disk stats (read/write): 00:29:18.216 nvme0n1: ios=76/512, merge=0/0, ticks=752/112, in_queue=864, util=86.67% 00:29:18.216 nvme0n2: ios=64/512, merge=0/0, ticks=1099/201, in_queue=1300, util=96.64% 00:29:18.216 nvme0n3: ios=57/512, merge=0/0, ticks=798/163, in_queue=961, util=96.20% 00:29:18.216 nvme0n4: ios=535/900, merge=0/0, ticks=1295/321, in_queue=1616, util=96.15% 00:29:18.216 14:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:18.216 [global] 00:29:18.216 thread=1 00:29:18.216 invalidate=1 00:29:18.216 rw=write 00:29:18.216 time_based=1 00:29:18.216 runtime=1 00:29:18.216 ioengine=libaio 00:29:18.216 direct=1 00:29:18.216 bs=4096 00:29:18.216 iodepth=128 00:29:18.216 norandommap=0 00:29:18.216 numjobs=1 00:29:18.216 00:29:18.216 verify_dump=1 00:29:18.216 verify_backlog=512 00:29:18.216 verify_state_save=0 00:29:18.216 do_verify=1 00:29:18.216 verify=crc32c-intel 00:29:18.216 [job0] 00:29:18.216 filename=/dev/nvme0n1 00:29:18.216 [job1] 00:29:18.216 filename=/dev/nvme0n2 00:29:18.216 [job2] 00:29:18.216 filename=/dev/nvme0n3 00:29:18.216 [job3] 00:29:18.216 filename=/dev/nvme0n4 00:29:18.216 Could not set queue depth (nvme0n1) 00:29:18.216 Could not set queue depth (nvme0n2) 00:29:18.216 Could not set queue depth (nvme0n3) 00:29:18.216 Could not set queue depth (nvme0n4) 00:29:18.474 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:18.474 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:18.474 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:18.474 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:18.474 fio-3.35 00:29:18.474 Starting 4 threads 00:29:19.853 00:29:19.853 job0: (groupid=0, jobs=1): err= 0: pid=1135329: Wed Nov 6 14:12:58 2024 00:29:19.854 read: IOPS=4756, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1008msec) 00:29:19.854 slat (nsec): min=924, max=11791k, avg=75622.64, stdev=654158.26 00:29:19.854 clat (usec): min=1136, max=38482, avg=10372.85, 
stdev=5239.74 00:29:19.854 lat (usec): min=1143, max=38509, avg=10448.47, stdev=5302.12 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 2343], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7635], 00:29:19.854 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:29:19.854 | 70.00th=[ 9503], 80.00th=[13566], 90.00th=[16057], 95.00th=[22152], 00:29:19.854 | 99.00th=[30016], 99.50th=[31851], 99.90th=[35390], 99.95th=[35390], 00:29:19.854 | 99.99th=[38536] 00:29:19.854 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:29:19.854 slat (nsec): min=1613, max=15027k, avg=69068.25, stdev=573152.11 00:29:19.854 clat (usec): min=488, max=125570, avg=11754.07, stdev=14616.31 00:29:19.854 lat (usec): min=504, max=125577, avg=11823.14, stdev=14674.27 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 971], 5.00th=[ 1450], 10.00th=[ 2376], 20.00th=[ 4228], 00:29:19.854 | 30.00th=[ 5276], 40.00th=[ 6325], 50.00th=[ 7701], 60.00th=[ 9634], 00:29:19.854 | 70.00th=[ 11207], 80.00th=[ 15008], 90.00th=[ 22938], 95.00th=[ 36963], 00:29:19.854 | 99.00th=[ 94897], 99.50th=[116917], 99.90th=[125305], 99.95th=[125305], 00:29:19.854 | 99.99th=[125305] 00:29:19.854 bw ( KiB/s): min=20056, max=32768, per=26.00%, avg=26412.00, stdev=8988.74, samples=2 00:29:19.854 iops : min= 5014, max= 8192, avg=6603.00, stdev=2247.19, samples=2 00:29:19.854 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.62% 00:29:19.854 lat (msec) : 2=4.08%, 4=7.52%, 10=54.41%, 20=23.43%, 50=8.91% 00:29:19.854 lat (msec) : 100=0.42%, 250=0.54% 00:29:19.854 cpu : usr=2.58%, sys=3.57%, ctx=520, majf=0, minf=1 00:29:19.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:29:19.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.854 issued rwts: total=4795,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.854 job1: (groupid=0, jobs=1): err= 0: pid=1135330: Wed Nov 6 14:12:58 2024 00:29:19.854 read: IOPS=6957, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1004msec) 00:29:19.854 slat (nsec): min=899, max=9104.0k, avg=67012.04, stdev=544507.76 00:29:19.854 clat (usec): min=2284, max=41339, avg=8350.44, stdev=3683.80 00:29:19.854 lat (usec): min=3092, max=41343, avg=8417.45, stdev=3741.93 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 3884], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6456], 00:29:19.854 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7439], 00:29:19.854 | 70.00th=[ 7963], 80.00th=[ 9372], 90.00th=[12649], 95.00th=[15926], 00:29:19.854 | 99.00th=[23200], 99.50th=[32637], 99.90th=[40633], 99.95th=[41157], 00:29:19.854 | 99.99th=[41157] 00:29:19.854 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:29:19.854 slat (nsec): min=1580, max=10783k, avg=67674.44, stdev=505184.09 00:29:19.854 clat (usec): min=805, max=90448, avg=9639.19, stdev=12485.02 00:29:19.854 lat (usec): min=813, max=90452, avg=9706.87, stdev=12567.69 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 1958], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 4752], 00:29:19.854 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7046], 00:29:19.854 | 70.00th=[ 7242], 80.00th=[ 8848], 90.00th=[15139], 95.00th=[26870], 00:29:19.854 | 99.00th=[78119], 99.50th=[83362], 99.90th=[88605], 99.95th=[90702], 00:29:19.854 | 99.99th=[90702] 00:29:19.854 bw ( 
KiB/s): min=20480, max=36864, per=28.22%, avg=28672.00, stdev=11585.24, samples=2 00:29:19.854 iops : min= 5120, max= 9216, avg=7168.00, stdev=2896.31, samples=2 00:29:19.854 lat (usec) : 1000=0.11% 00:29:19.854 lat (msec) : 2=0.44%, 4=2.89%, 10=80.63%, 20=12.11%, 50=2.31% 00:29:19.854 lat (msec) : 100=1.51% 00:29:19.854 cpu : usr=2.59%, sys=4.09%, ctx=500, majf=0, minf=2 00:29:19.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:19.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.854 issued rwts: total=6985,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.854 job2: (groupid=0, jobs=1): err= 0: pid=1135331: Wed Nov 6 14:12:58 2024 00:29:19.854 read: IOPS=6356, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1005msec) 00:29:19.854 slat (nsec): min=1081, max=12941k, avg=80464.38, stdev=556044.09 00:29:19.854 clat (usec): min=937, max=30256, avg=9654.33, stdev=3527.76 00:29:19.854 lat (usec): min=3611, max=30262, avg=9734.79, stdev=3567.44 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7308], 00:29:19.854 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9372], 00:29:19.854 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[13435], 95.00th=[16581], 00:29:19.854 | 99.00th=[24249], 99.50th=[26346], 99.90th=[28443], 99.95th=[30278], 00:29:19.854 | 99.99th=[30278] 00:29:19.854 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:29:19.854 slat (nsec): min=1770, max=9028.1k, avg=70823.74, stdev=440402.43 00:29:19.854 clat (usec): min=2715, max=41695, avg=9868.94, stdev=5885.08 00:29:19.854 lat (usec): min=2718, max=41700, avg=9939.76, stdev=5928.43 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6718], 20.00th=[ 7767], 00:29:19.854 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8455], 00:29:19.854 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[13042], 95.00th=[20579], 00:29:19.854 | 99.00th=[38536], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:29:19.854 | 99.99th=[41681] 00:29:19.854 bw ( KiB/s): min=23920, max=29328, per=26.21%, avg=26624.00, stdev=3824.03, samples=2 00:29:19.854 iops : min= 5980, max= 7332, avg=6656.00, stdev=956.01, samples=2 00:29:19.854 lat (usec) : 1000=0.01% 00:29:19.854 lat (msec) : 4=0.21%, 10=77.64%, 20=17.78%, 50=4.35% 00:29:19.854 cpu : usr=2.39%, sys=3.88%, ctx=710, majf=0, minf=1 00:29:19.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:29:19.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.854 issued rwts: total=6388,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.854 job3: (groupid=0, jobs=1): err= 0: pid=1135332: Wed Nov 6 14:12:58 2024 00:29:19.854 read: IOPS=5011, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1006msec) 00:29:19.854 slat (nsec): min=934, max=17615k, avg=98883.25, stdev=836169.23 00:29:19.854 clat (usec): min=2914, max=54590, avg=12031.63, stdev=7256.84 00:29:19.854 lat (usec): min=2917, max=54594, avg=12130.51, stdev=7320.97 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 5014], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6783], 00:29:19.854 | 30.00th=[ 
7832], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10159], 00:29:19.854 | 70.00th=[13042], 80.00th=[16319], 90.00th=[20841], 95.00th=[24773], 00:29:19.854 | 99.00th=[44827], 99.50th=[52167], 99.90th=[54264], 99.95th=[54789], 00:29:19.854 | 99.99th=[54789] 00:29:19.854 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:29:19.854 slat (nsec): min=1658, max=11015k, avg=95481.02, stdev=586637.76 00:29:19.854 clat (usec): min=913, max=74890, avg=13073.74, stdev=11749.08 00:29:19.854 lat (usec): min=923, max=74893, avg=13169.22, stdev=11820.43 00:29:19.854 clat percentiles (usec): 00:29:19.854 | 1.00th=[ 3425], 5.00th=[ 4752], 10.00th=[ 5538], 20.00th=[ 6521], 00:29:19.854 | 30.00th=[ 6980], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[10552], 00:29:19.854 | 70.00th=[13960], 80.00th=[15533], 90.00th=[27395], 95.00th=[38536], 00:29:19.854 | 99.00th=[68682], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:29:19.854 | 99.99th=[74974] 00:29:19.854 bw ( KiB/s): min=13808, max=27152, per=20.16%, avg=20480.00, stdev=9435.63, samples=2 00:29:19.854 iops : min= 3452, max= 6788, avg=5120.00, stdev=2358.91, samples=2 00:29:19.854 lat (usec) : 1000=0.02% 00:29:19.854 lat (msec) : 4=1.48%, 10=55.20%, 20=30.97%, 50=10.72%, 100=1.62% 00:29:19.854 cpu : usr=2.29%, sys=2.29%, ctx=459, majf=0, minf=2 00:29:19.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:19.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.854 issued rwts: total=5042,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.854 00:29:19.854 Run status group 0 (all jobs): 00:29:19.854 READ: bw=89.9MiB/s (94.3MB/s), 18.6MiB/s-27.2MiB/s (19.5MB/s-28.5MB/s), io=90.7MiB (95.1MB), run=1004-1008msec 00:29:19.854 WRITE: bw=99.2MiB/s (104MB/s), 19.9MiB/s-27.9MiB/s (20.8MB/s-29.2MB/s), io=100MiB (105MB), run=1004-1008msec 00:29:19.854 00:29:19.854 Disk stats (read/write): 00:29:19.854 nvme0n1: ios=3862/5632, merge=0/0, ticks=40329/58281, in_queue=98610, util=96.39% 00:29:19.854 nvme0n2: ios=5112/5127, merge=0/0, ticks=42698/54884, in_queue=97582, util=95.97% 00:29:19.854 nvme0n3: ios=4896/5120, merge=0/0, ticks=34082/37993, in_queue=72075, util=96.16% 00:29:19.854 nvme0n4: ios=4096/4215, merge=0/0, ticks=48817/46775, in_queue=95592, util=88.89% 00:29:19.854 14:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:19.854 [global] 00:29:19.854 thread=1 00:29:19.854 invalidate=1 00:29:19.854 rw=randwrite 00:29:19.854 time_based=1 00:29:19.854 runtime=1 00:29:19.854 ioengine=libaio 00:29:19.854 direct=1 00:29:19.854 bs=4096 00:29:19.854 iodepth=128 00:29:19.854 norandommap=0 00:29:19.854 numjobs=1 00:29:19.854 00:29:19.854 verify_dump=1 00:29:19.854 verify_backlog=512 00:29:19.854 verify_state_save=0 00:29:19.854 do_verify=1 00:29:19.854 verify=crc32c-intel 00:29:19.854 [job0] 00:29:19.854 filename=/dev/nvme0n1 00:29:19.854 [job1] 00:29:19.854 filename=/dev/nvme0n2 00:29:19.854 [job2] 00:29:19.854 filename=/dev/nvme0n3 00:29:19.854 [job3] 00:29:19.854 filename=/dev/nvme0n4 00:29:19.854 Could not set queue depth (nvme0n1) 00:29:19.854 Could not set queue depth (nvme0n2) 00:29:19.854 Could not set queue depth (nvme0n3) 00:29:19.854 Could not set queue depth 
(nvme0n4) 00:29:20.113 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:20.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:20.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:20.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:20.113 fio-3.35 00:29:20.113 Starting 4 threads 00:29:21.494 00:29:21.494 job0: (groupid=0, jobs=1): err= 0: pid=1135846: Wed Nov 6 14:13:00 2024 00:29:21.494 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:29:21.494 slat (nsec): min=891, max=13813k, avg=65661.50, stdev=400933.88 00:29:21.494 clat (usec): min=1721, max=23214, avg=8368.45, stdev=2041.35 00:29:21.494 lat (usec): min=1746, max=23221, avg=8434.12, stdev=2067.50 00:29:21.494 clat percentiles (usec): 00:29:21.494 | 1.00th=[ 3228], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7242], 00:29:21.494 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:29:21.494 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[10028], 95.00th=[11469], 00:29:21.494 | 99.00th=[16909], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:29:21.494 | 99.99th=[23200] 00:29:21.494 write: IOPS=8226, BW=32.1MiB/s (33.7MB/s)(32.2MiB/1003msec); 0 zone resets 00:29:21.494 slat (nsec): min=1474, max=8618.4k, avg=51362.70, stdev=289472.20 00:29:21.494 clat (usec): min=209, max=70427, avg=7629.51, stdev=6279.01 00:29:21.494 lat (usec): min=241, max=70435, avg=7680.87, stdev=6297.72 00:29:21.494 clat percentiles (usec): 00:29:21.494 | 1.00th=[ 824], 5.00th=[ 2278], 10.00th=[ 4555], 20.00th=[ 6456], 00:29:21.494 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:29:21.494 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9372], 00:29:21.494 | 99.00th=[47973], 99.50th=[61604], 99.90th=[68682], 99.95th=[68682], 00:29:21.494 | 99.99th=[70779] 00:29:21.494 bw ( KiB/s): min=32768, max=32768, per=34.09%, avg=32768.00, stdev= 0.00, samples=2 00:29:21.494 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:29:21.494 lat (usec) : 250=0.01%, 500=0.09%, 750=0.27%, 1000=0.30% 00:29:21.494 lat (msec) : 2=1.53%, 4=2.54%, 10=88.71%, 20=5.43%, 50=0.63% 00:29:21.494 lat (msec) : 100=0.49% 00:29:21.494 cpu : usr=3.59%, sys=4.79%, ctx=955, majf=0, minf=1 00:29:21.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:21.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.494 issued rwts: total=7680,8251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.494 job1: (groupid=0, jobs=1): err= 0: pid=1135847: Wed Nov 6 14:13:00 2024 00:29:21.494 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:29:21.494 slat (nsec): min=902, max=14327k, avg=103126.48, stdev=787574.58 00:29:21.494 clat (usec): min=4662, max=53019, avg=13644.79, stdev=7543.55 00:29:21.494 lat (usec): min=4668, max=55660, avg=13747.92, stdev=7607.28 00:29:21.494 clat percentiles (usec): 00:29:21.494 | 1.00th=[ 4686], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9372], 00:29:21.494 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[11207], 00:29:21.494 | 70.00th=[12780], 80.00th=[17171], 90.00th=[23200], 95.00th=[30016], 
00:29:21.494 | 99.00th=[44303], 99.50th=[44303], 99.90th=[50594], 99.95th=[50594], 00:29:21.494 | 99.99th=[53216] 00:29:21.494 write: IOPS=4317, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1006msec); 0 zone resets 00:29:21.495 slat (nsec): min=1536, max=22633k, avg=129093.31, stdev=789789.47 00:29:21.495 clat (usec): min=1200, max=71619, avg=16521.69, stdev=13415.15 00:29:21.495 lat (usec): min=1211, max=72876, avg=16650.79, stdev=13494.13 00:29:21.495 clat percentiles (usec): 00:29:21.495 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 8029], 20.00th=[ 8586], 00:29:21.495 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[14353], 00:29:21.495 | 70.00th=[15270], 80.00th=[18220], 90.00th=[36439], 95.00th=[44303], 00:29:21.495 | 99.00th=[69731], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:29:21.495 | 99.99th=[71828] 00:29:21.495 bw ( KiB/s): min=15624, max=18104, per=17.54%, avg=16864.00, stdev=1753.62, samples=2 00:29:21.495 iops : min= 3906, max= 4526, avg=4216.00, stdev=438.41, samples=2 00:29:21.495 lat (msec) : 2=0.02%, 10=41.24%, 20=41.09%, 50=15.38%, 100=2.26% 00:29:21.495 cpu : usr=2.09%, sys=3.78%, ctx=359, majf=0, minf=1 00:29:21.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:21.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.495 issued rwts: total=4096,4343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.495 job2: (groupid=0, jobs=1): err= 0: pid=1135852: Wed Nov 6 14:13:00 2024 00:29:21.495 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:29:21.495 slat (nsec): min=969, max=12019k, avg=104073.62, stdev=780659.25 00:29:21.495 clat (usec): min=3312, max=41631, avg=12055.95, stdev=5126.66 00:29:21.495 lat (usec): min=3314, max=41636, avg=12160.02, stdev=5194.38 00:29:21.495 clat percentiles (usec): 00:29:21.495 | 1.00th=[ 4621], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:29:21.495 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11338], 00:29:21.495 | 70.00th=[12125], 80.00th=[14353], 90.00th=[16712], 95.00th=[22938], 00:29:21.495 | 99.00th=[33817], 99.50th=[36439], 99.90th=[41681], 99.95th=[41681], 00:29:21.495 | 99.99th=[41681] 00:29:21.495 write: IOPS=4771, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1009msec); 0 zone resets 00:29:21.495 slat (nsec): min=1595, max=9433.2k, avg=104602.86, stdev=539240.90 00:29:21.495 clat (usec): min=859, max=41615, avg=15042.97, stdev=8237.27 00:29:21.495 lat (usec): min=867, max=41617, avg=15147.57, stdev=8292.51 00:29:21.495 clat percentiles (usec): 00:29:21.495 | 1.00th=[ 3752], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7701], 00:29:21.495 | 30.00th=[ 8848], 40.00th=[10421], 50.00th=[12911], 60.00th=[14877], 00:29:21.495 | 70.00th=[17695], 80.00th=[25035], 90.00th=[28181], 95.00th=[29230], 00:29:21.495 | 99.00th=[32900], 99.50th=[34866], 99.90th=[34866], 99.95th=[36439], 00:29:21.495 | 99.99th=[41681] 00:29:21.495 bw ( KiB/s): min=17016, max=20480, per=19.50%, avg=18748.00, stdev=2449.42, samples=2 00:29:21.495 iops : min= 4254, max= 5120, avg=4687.00, stdev=612.35, samples=2 00:29:21.495 lat (usec) : 1000=0.02% 00:29:21.495 lat (msec) : 4=0.79%, 10=37.76%, 20=44.19%, 50=17.24% 00:29:21.495 cpu : usr=1.69%, sys=4.37%, ctx=449, majf=0, minf=2 00:29:21.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:29:21.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:29:21.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.495 issued rwts: total=4608,4814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.495 job3: (groupid=0, jobs=1): err= 0: pid=1135856: Wed Nov 6 14:13:00 2024 00:29:21.495 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(29.1MiB/1044msec) 00:29:21.495 slat (nsec): min=915, max=8481.8k, avg=69122.97, stdev=543303.69 00:29:21.495 clat (usec): min=2870, max=51718, avg=9203.81, stdev=5615.18 00:29:21.495 lat (usec): min=2873, max=51720, avg=9272.93, stdev=5630.37 00:29:21.495 clat percentiles (usec): 00:29:21.495 | 1.00th=[ 4178], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7177], 00:29:21.495 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8455], 00:29:21.495 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[11731], 95.00th=[13173], 00:29:21.495 | 99.00th=[49021], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:29:21.495 | 99.99th=[51643] 00:29:21.495 write: IOPS=7356, BW=28.7MiB/s (30.1MB/s)(30.0MiB/1044msec); 0 zone resets 00:29:21.495 slat (nsec): min=1522, max=11571k, avg=60123.50, stdev=440593.70 00:29:21.495 clat (usec): min=1815, max=34019, avg=8248.03, stdev=3183.65 00:29:21.495 lat (usec): min=1819, max=34053, avg=8308.15, stdev=3215.06 00:29:21.495 clat percentiles (usec): 00:29:21.495 | 1.00th=[ 3163], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6652], 00:29:21.495 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:29:21.495 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[10552], 95.00th=[14222], 00:29:21.495 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:29:21.495 | 99.99th=[33817] 00:29:21.495 bw ( KiB/s): min=28672, max=32768, per=31.96%, avg=30720.00, stdev=2896.31, samples=2 00:29:21.495 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:29:21.495 lat (msec) : 2=0.04%, 4=1.33%, 10=81.47%, 20=15.44%, 50=1.62% 00:29:21.495 lat (msec) : 100=0.11% 00:29:21.495 cpu : usr=3.45%, sys=4.41%, ctx=652, majf=0, minf=1 00:29:21.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:21.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.495 issued rwts: total=7439,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.495 00:29:21.495 Run status group 0 (all jobs): 00:29:21.495 READ: bw=89.1MiB/s (93.5MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.4MB/s), io=93.1MiB (97.6MB), run=1003-1044msec 00:29:21.495 WRITE: bw=93.9MiB/s (98.4MB/s), 16.9MiB/s-32.1MiB/s (17.7MB/s-33.7MB/s), io=98.0MiB (103MB), run=1003-1044msec 00:29:21.495 00:29:21.495 Disk stats (read/write): 00:29:21.495 nvme0n1: ios=6194/7111, merge=0/0, ticks=26849/30783, in_queue=57632, util=91.78% 00:29:21.495 nvme0n2: ios=3623/3817, merge=0/0, ticks=24798/32368, in_queue=57166, util=91.02% 00:29:21.495 nvme0n3: ios=3611/3935, merge=0/0, ticks=42391/60185, in_queue=102576, util=90.72% 00:29:21.495 nvme0n4: ios=6144/6279, merge=0/0, ticks=43422/40649, in_queue=84071, util=89.42% 00:29:21.495 14:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:21.495 14:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1136189 00:29:21.495 14:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@61 -- # sleep 3 00:29:21.495 14:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:21.495 [global] 00:29:21.495 thread=1 00:29:21.495 invalidate=1 00:29:21.495 rw=read 00:29:21.495 time_based=1 00:29:21.495 runtime=10 00:29:21.495 ioengine=libaio 00:29:21.495 direct=1 00:29:21.495 bs=4096 00:29:21.495 iodepth=1 00:29:21.495 norandommap=1 00:29:21.495 numjobs=1 00:29:21.495 00:29:21.495 [job0] 00:29:21.495 filename=/dev/nvme0n1 00:29:21.495 [job1] 00:29:21.495 filename=/dev/nvme0n2 00:29:21.495 [job2] 00:29:21.495 filename=/dev/nvme0n3 00:29:21.495 [job3] 00:29:21.495 filename=/dev/nvme0n4 00:29:21.495 Could not set queue depth (nvme0n1) 00:29:21.495 Could not set queue depth (nvme0n2) 00:29:21.495 Could not set queue depth (nvme0n3) 00:29:21.495 Could not set queue depth (nvme0n4) 00:29:21.755 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:21.755 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:21.755 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:21.755 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:21.755 fio-3.35 00:29:21.755 Starting 4 threads 00:29:24.295 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:24.554 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9740288, buflen=4096 00:29:24.554 fio: pid=1136400, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:24.554 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:24.813 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14675968, buflen=4096 00:29:24.813 fio: pid=1136396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:24.813 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:24.813 14:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:24.813 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2510848, buflen=4096 00:29:24.813 fio: pid=1136374, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:24.813 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:24.813 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:25.072 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:25.072 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:25.072 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1232896, buflen=4096 00:29:25.072 fio: pid=1136380, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:25.072 00:29:25.072 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136374: Wed Nov 6 14:13:04 2024 00:29:25.072 read: IOPS=206, BW=824KiB/s (844kB/s)(2452KiB/2975msec) 00:29:25.072 slat (usec): min=3, max=279, avg=14.21, stdev=13.61 00:29:25.072 clat (usec): min=407, max=41779, avg=4834.09, stdev=12282.72 00:29:25.072 lat (usec): min=418, max=41982, avg=4848.28, stdev=12287.21 00:29:25.072 clat percentiles (usec): 00:29:25.072 | 1.00th=[ 449], 5.00th=[ 519], 10.00th=[ 553], 20.00th=[ 619], 00:29:25.072 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 734], 00:29:25.072 | 70.00th=[ 758], 80.00th=[ 783], 90.00th=[40633], 95.00th=[41157], 00:29:25.072 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:29:25.072 | 99.99th=[41681] 00:29:25.072 bw ( KiB/s): min= 144, max= 1736, per=11.03%, avg=961.60, stdev=618.14, samples=5 00:29:25.072 iops : min= 36, max= 434, avg=240.40, stdev=154.53, samples=5 00:29:25.072 lat (usec) : 500=3.09%, 750=64.82%, 1000=21.66% 00:29:25.072 lat (msec) : 50=10.26% 00:29:25.072 cpu : usr=0.03%, sys=0.37%, ctx=616, majf=0, minf=1 00:29:25.072 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.072 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.072 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.072 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:25.072 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136380: Wed Nov 6 14:13:04 2024 00:29:25.072 read: IOPS=95, BW=381KiB/s (391kB/s)(1204KiB/3156msec) 00:29:25.072 slat (usec): min=3, max=9693, avg=72.99, stdev=703.41 00:29:25.072 clat (usec): min=451, max=44976, avg=10402.70, stdev=17080.40 00:29:25.072 lat (usec): min=463, max=45001, avg=10450.97, stdev=17076.48 00:29:25.072 clat percentiles (usec): 00:29:25.072 | 1.00th=[ 611], 5.00th=[ 701], 10.00th=[ 742], 20.00th=[ 857], 00:29:25.072 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1029], 00:29:25.072 | 70.00th=[ 1090], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:25.072 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:29:25.072 | 99.99th=[44827] 00:29:25.072 bw ( KiB/s): min= 96, max= 1513, per=4.09%, avg=356.17, stdev=568.59, samples=6 00:29:25.072 iops : min= 24, max= 378, avg=89.00, stdev=142.05, samples=6 00:29:25.072 lat (usec) : 500=0.66%, 750=9.93%, 1000=43.05% 00:29:25.073 lat (msec) : 2=22.52%, 50=23.51% 00:29:25.073 cpu : usr=0.03%, sys=0.19%, ctx=305, majf=0, minf=2 00:29:25.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.073 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.073 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:25.073 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=1136396: Wed Nov 6 14:13:04 2024 00:29:25.073 read: IOPS=1276, BW=5106KiB/s (5228kB/s)(14.0MiB/2807msec) 00:29:25.073 slat (usec): min=2, max=15156, avg=19.26, stdev=276.78 00:29:25.073 clat (usec): min=173, max=41533, avg=760.84, stdev=1362.20 00:29:25.073 lat (usec): min=176, max=41537, avg=780.10, stdev=1390.20 00:29:25.073 clat percentiles (usec): 00:29:25.073 | 1.00th=[ 330], 5.00th=[ 478], 10.00th=[ 537], 20.00th=[ 594], 00:29:25.073 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 750], 00:29:25.073 | 70.00th=[ 783], 80.00th=[ 832], 90.00th=[ 922], 95.00th=[ 988], 00:29:25.073 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[41157], 99.95th=[41681], 00:29:25.073 | 99.99th=[41681] 00:29:25.073 bw ( KiB/s): min= 3440, max= 5912, per=58.33%, avg=5083.20, stdev=996.13, samples=5 00:29:25.073 iops : min= 860, max= 1478, avg=1270.80, stdev=249.03, samples=5 00:29:25.073 lat (usec) : 250=0.22%, 500=6.19%, 750=53.71%, 1000=35.71% 00:29:25.073 lat (msec) : 2=4.02%, 50=0.11% 00:29:25.073 cpu : usr=0.68%, sys=3.06%, ctx=3587, majf=0, minf=2 00:29:25.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.073 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.073 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:25.073 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136400: Wed Nov 6 14:13:04 2024 00:29:25.073 read: IOPS=900, BW=3599KiB/s (3685kB/s)(9512KiB/2643msec) 00:29:25.073 slat (nsec): min=2946, max=46779, avg=16459.94, stdev=5104.88 00:29:25.073 clat (usec): min=151, max=41566, avg=1090.85, stdev=2478.21 00:29:25.073 lat (usec): min=154, max=41579, avg=1107.31, stdev=2478.20 00:29:25.073 clat percentiles (usec): 00:29:25.073 | 1.00th=[ 519], 5.00th=[ 709], 10.00th=[ 807], 20.00th=[ 881], 00:29:25.073 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 971], 00:29:25.073 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:29:25.073 | 99.00th=[ 1418], 99.50th=[ 2114], 99.90th=[41157], 99.95th=[41681], 00:29:25.073 | 99.99th=[41681] 00:29:25.073 bw ( KiB/s): min= 3080, max= 4288, per=41.04%, avg=3576.00, stdev=466.06, samples=5 00:29:25.073 iops : min= 770, max= 1072, avg=894.00, stdev=116.52, samples=5 00:29:25.073 lat (usec) : 250=0.04%, 500=0.76%, 750=5.76%, 1000=68.05% 00:29:25.073 lat (msec) : 2=24.84%, 4=0.13%, 50=0.38% 00:29:25.073 cpu : usr=1.02%, sys=2.65%, ctx=2379, majf=0, minf=2 00:29:25.073 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.073 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.073 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.073 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:25.073 00:29:25.073 Run status group 0 (all jobs): 00:29:25.073 READ: bw=8714KiB/s (8923kB/s), 381KiB/s-5106KiB/s (391kB/s-5228kB/s), io=26.9MiB (28.2MB), run=2643-3156msec 00:29:25.073 00:29:25.073 Disk stats (read/write): 00:29:25.073 nvme0n1: ios=609/0, merge=0/0, ticks=2796/0, in_queue=2796, util=94.73% 00:29:25.073 nvme0n2: ios=299/0, merge=0/0, ticks=3042/0, in_queue=3042, util=95.35% 00:29:25.073 nvme0n3: ios=3302/0, merge=0/0, ticks=2290/0, in_queue=2290, util=95.99% 
00:29:25.073 nvme0n4: ios=2320/0, merge=0/0, ticks=2392/0, in_queue=2392, util=96.39% 00:29:25.333 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:25.333 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:25.333 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:25.333 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:25.592 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:25.592 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:25.592 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:25.592 14:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1136189 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:25.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:25.851 nvmf hotplug test: fio failed as expected 00:29:25.851 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.110 rmmod nvme_tcp 00:29:26.110 rmmod nvme_fabrics 00:29:26.110 rmmod nvme_keyring 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1132696 ']' 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1132696 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1132696 ']' 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1132696 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1132696 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1132696' 00:29:26.110 killing process with pid 1132696 00:29:26.110 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1132696 00:29:26.110 14:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1132696 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.369 14:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.271 00:29:28.271 real 0m24.737s 00:29:28.271 user 2m5.971s 00:29:28.271 sys 0m9.396s 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.271 ************************************ 00:29:28.271 END TEST nvmf_fio_target 00:29:28.271 ************************************ 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:28.271 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:28.531 ************************************ 00:29:28.531 START TEST nvmf_bdevio 00:29:28.531 ************************************ 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:28.531 * Looking for test storage... 
00:29:28.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.531 --rc genhtml_branch_coverage=1 00:29:28.531 --rc genhtml_function_coverage=1 00:29:28.531 --rc genhtml_legend=1 00:29:28.531 --rc geninfo_all_blocks=1 00:29:28.531 --rc geninfo_unexecuted_blocks=1 00:29:28.531 00:29:28.531 ' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.531 --rc genhtml_branch_coverage=1 00:29:28.531 --rc genhtml_function_coverage=1 00:29:28.531 --rc genhtml_legend=1 00:29:28.531 --rc geninfo_all_blocks=1 00:29:28.531 --rc geninfo_unexecuted_blocks=1 00:29:28.531 00:29:28.531 ' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.531 --rc genhtml_branch_coverage=1 00:29:28.531 --rc genhtml_function_coverage=1 00:29:28.531 --rc genhtml_legend=1 00:29:28.531 --rc geninfo_all_blocks=1 00:29:28.531 --rc geninfo_unexecuted_blocks=1 00:29:28.531 00:29:28.531 ' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.531 --rc genhtml_branch_coverage=1 00:29:28.531 --rc genhtml_function_coverage=1 00:29:28.531 --rc genhtml_legend=1 00:29:28.531 --rc geninfo_all_blocks=1 00:29:28.531 --rc geninfo_unexecuted_blocks=1 00:29:28.531 00:29:28.531 ' 00:29:28.531 14:13:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.531 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.532 14:13:07 
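
Two things are worth noting in the block above. First, the host NQN and host ID that test/nvmf/common.sh settles on come straight from nvme-cli, so the uuid-style value in the log is reproducible:

    # nvme-cli prints an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>;
    # common.sh derives the bare host ID from it.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # sketch of the strip; exact expansion may differ

Second, each successive PATH= line is longer than the last because paths/export.sh unconditionally prepends the go, protoc and golangci directories every time it is sourced, which happens once per test script. The duplicates are harmless, but a hypothetical guard like this would keep the variable flat:

    # Hypothetical idempotent prepend; not what export.sh actually does in this log.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
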
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.532 14:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
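
The checks just above finish assembling the target's argument array: every run gets the shared-memory id and the full tracepoint mask, and because this is the interrupt-mode job the already-expanded '[' 1 -eq 1 ']' test also appends --interrupt-mode. Condensed (the base of the array is assumed, since the trace starts inside build_nvmf_app_args):

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id 0, trace everything
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty for this run
    if [ 1 -eq 1 ]; then                          # interrupt-mode flag, pre-expanded to 1
        NVMF_APP+=(--interrupt-mode)
    fi
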
net_devs 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.807 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:33.808 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:33.808 14:13:12 
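
The arrays being filled above are the NIC allow-list: Intel E810 (device IDs 0x1592/0x159b), X722 (0x37d2), and a range of Mellanox ConnectX IDs. Since this run's NIC type is e810 (the [[ e810 == e810 ]] check), pci_devs is immediately narrowed to the two E810 functions found next. The trace reads a prebuilt pci_bus_cache; roughly the same scan can be approximated with lspci (a hedged equivalent, output format assumed):

    # Hedged equivalent of the pci_bus_cache lookup; the suite uses a cached bus map.
    mapfile -t e810 < <(lspci -Dnn | grep -Ei '8086:(1592|159b)' | awk '{print $1}')
    printf 'Found %s\n' "${e810[@]}"
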
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:33.808 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:33.808 Found net devices under 0000:31:00.0: cvl_0_0 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
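
Each allow-listed PCI function is then mapped to its kernel netdev through sysfs, which is where the cvl_0_0 and cvl_0_1 names in the log come from. A minimal sketch of that mapping:

    # Map a PCI function to its bound net devices via sysfs, as the trace does.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue            # function has no netdev bound
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
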
-- # [[ tcp == tcp ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:33.808 Found net devices under 0000:31:00.1: cvl_0_1 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:29:33.808 00:29:33.808 --- 10.0.0.2 ping statistics --- 00:29:33.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.808 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:29:33.808 00:29:33.808 --- 10.0.0.1 ping statistics --- 00:29:33.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.808 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.808 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.809 14:13:12 
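
The block above builds the whole point-to-point fixture and proves it works before any NVMe traffic flows: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP 4420 on the initiator interface, and one ping in each direction checks the link (sub-millisecond RTTs here). Reconstructed as standalone commands, with names and addresses taken from the log:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator
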
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1141729 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1141729 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1141729 ']' 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:33.809 14:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:33.809 [2024-11-06 14:13:12.900950] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:33.809 [2024-11-06 14:13:12.901929] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:29:33.809 [2024-11-06 14:13:12.901965] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.809 [2024-11-06 14:13:12.972416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.809 [2024-11-06 14:13:13.001090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.809 [2024-11-06 14:13:13.001115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.809 [2024-11-06 14:13:13.001124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.809 [2024-11-06 14:13:13.001129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.809 [2024-11-06 14:13:13.001133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.809 [2024-11-06 14:13:13.002349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:33.809 [2024-11-06 14:13:13.002585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:33.809 [2024-11-06 14:13:13.002737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.809 [2024-11-06 14:13:13.002738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:33.809 [2024-11-06 14:13:13.052576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
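
nvmfappstart above launches the target inside the namespace with core mask 0x78 (cores 3-6, matching the four reactor notices) and then waits on the RPC socket; the DPDK EAL parameter line confirms --interrupt-mode made it through to app startup, and each spdk_thread is switched to interrupt mode as it comes up. Condensed, with a hedged readiness loop standing in for waitforlisten:

    # Launch as traced above; the polling loop is a sketch, not waitforlisten itself.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
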
00:29:33.809 [2024-11-06 14:13:13.053647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:33.809 [2024-11-06 14:13:13.054189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:33.809 [2024-11-06 14:13:13.054348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:33.809 [2024-11-06 14:13:13.054354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:33.809 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.809 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:29:33.809 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.809 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.809 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:34.069 [2024-11-06 14:13:13.103436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:34.069 Malloc0 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:34.069 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.070 14:13:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:34.070 [2024-11-06 14:13:13.163228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.070 { 00:29:34.070 "params": { 00:29:34.070 "name": "Nvme$subsystem", 00:29:34.070 "trtype": "$TEST_TRANSPORT", 00:29:34.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.070 "adrfam": "ipv4", 00:29:34.070 "trsvcid": "$NVMF_PORT", 00:29:34.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.070 "hdgst": ${hdgst:-false}, 00:29:34.070 "ddgst": ${ddgst:-false} 00:29:34.070 }, 00:29:34.070 "method": "bdev_nvme_attach_controller" 00:29:34.070 } 00:29:34.070 EOF 00:29:34.070 )") 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:34.070 14:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.070 "params": { 00:29:34.070 "name": "Nvme1", 00:29:34.070 "trtype": "tcp", 00:29:34.070 "traddr": "10.0.0.2", 00:29:34.070 "adrfam": "ipv4", 00:29:34.070 "trsvcid": "4420", 00:29:34.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.070 "hdgst": false, 00:29:34.070 "ddgst": false 00:29:34.070 }, 00:29:34.070 "method": "bdev_nvme_attach_controller" 00:29:34.070 }' 00:29:34.070 [2024-11-06 14:13:13.200505] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
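
The rpc_cmd calls above stand up everything bdevio will exercise: a TCP transport with an 8192-byte in-capsule data size, a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_* defaults from bdevio.sh), a subsystem that allows any host, its one namespace, and a listener on the namespaced address. gen_nvmf_target_json then renders the matching initiator config, the JSON printed above, for bdevio to consume. The same setup by hand, via scripts/rpc.py:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Then run bdevio against it; /dev/fd/62 in the log is bash process substitution:
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
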
00:29:34.070 [2024-11-06 14:13:13.200558] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141753 ] 00:29:34.070 [2024-11-06 14:13:13.279314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:34.070 [2024-11-06 14:13:13.318289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.070 [2024-11-06 14:13:13.318364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.070 [2024-11-06 14:13:13.318364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.329 I/O targets: 00:29:34.329 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:34.329 00:29:34.329 00:29:34.329 CUnit - A unit testing framework for C - Version 2.1-3 00:29:34.329 http://cunit.sourceforge.net/ 00:29:34.329 00:29:34.329 00:29:34.329 Suite: bdevio tests on: Nvme1n1 00:29:34.329 Test: blockdev write read block ...passed 00:29:34.329 Test: blockdev write zeroes read block ...passed 00:29:34.329 Test: blockdev write zeroes read no split ...passed 00:29:34.329 Test: blockdev write zeroes read split ...passed 00:29:34.329 Test: blockdev write zeroes read split partial ...passed 00:29:34.329 Test: blockdev reset ...[2024-11-06 14:13:13.612153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:34.329 [2024-11-06 14:13:13.612215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24734b0 (9): Bad file descriptor 00:29:34.588 [2024-11-06 14:13:13.659698] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:29:34.588 passed 00:29:34.588 Test: blockdev write read 8 blocks ...passed 00:29:34.588 Test: blockdev write read size > 128k ...passed 00:29:34.588 Test: blockdev write read invalid size ...passed 00:29:34.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:34.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:34.588 Test: blockdev write read max offset ...passed 00:29:34.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:34.588 Test: blockdev writev readv 8 blocks ...passed 00:29:34.588 Test: blockdev writev readv 30 x 1block ...passed 00:29:34.588 Test: blockdev writev readv block ...passed 00:29:34.849 Test: blockdev writev readv size > 128k ...passed 00:29:34.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:34.849 Test: blockdev comparev and writev ...[2024-11-06 14:13:13.878371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.878396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.878408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.878414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.878807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.878814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.878824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.878829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.879233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.879241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.879254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.879260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.879664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.879672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.879681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:34.849 [2024-11-06 14:13:13.879687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:34.849 passed 00:29:34.849 Test: blockdev nvme passthru rw ...passed 00:29:34.849 Test: blockdev nvme passthru vendor specific ...[2024-11-06 14:13:13.962773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:34.849 [2024-11-06 14:13:13.962784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.963036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:34.849 [2024-11-06 14:13:13.963043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.963262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:34.849 [2024-11-06 14:13:13.963270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:34.849 [2024-11-06 14:13:13.963497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:34.849 [2024-11-06 14:13:13.963504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:34.849 passed 00:29:34.849 Test: blockdev nvme admin passthru ...passed 00:29:34.849 Test: blockdev copy ...passed 00:29:34.849 00:29:34.849 Run Summary: Type Total Ran Passed Failed Inactive 00:29:34.849 suites 1 1 n/a 0 0 00:29:34.849 tests 23 23 23 0 0 00:29:34.849 asserts 152 152 152 0 n/a 00:29:34.849 00:29:34.849 Elapsed time = 1.135 seconds 00:29:34.849 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.849 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.849 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.109 rmmod nvme_tcp 00:29:35.109 rmmod nvme_fabrics 00:29:35.109 rmmod nvme_keyring 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1141729 ']' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1141729 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1141729 ']' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1141729 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1141729 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1141729' 00:29:35.109 killing process with pid 1141729 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1141729 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1141729 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.109 14:13:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.645 14:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.645 00:29:37.645 real 0m8.859s 00:29:37.645 user 
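
Teardown above mirrors the setup in reverse: the subsystem is deleted over RPC, nvme-tcp and nvme-fabrics are unloaded (taking nvme_keyring with them), the target process (pid 1141729, shown by ps as reactor_3, i.e. not a sudo wrapper, so it can be killed directly) is killed and reaped, and iptr strips every SPDK_NVMF-tagged rule by round-tripping the ruleset through iptables-save/iptables-restore before the namespace goes away. Condensed, with the namespace removal sketched, since the suite's _remove_spdk_ns runs with xtrace disabled and its exact commands are not in the log:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed; hidden behind xtrace_disable
    ip -4 addr flush cvl_0_1
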
0m7.614s 00:29:37.645 sys 0m4.478s 00:29:37.645 14:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:37.645 14:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:37.645 ************************************ 00:29:37.645 END TEST nvmf_bdevio 00:29:37.645 ************************************ 00:29:37.645 14:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:37.645 00:29:37.645 real 4m23.303s 00:29:37.645 user 9m42.570s 00:29:37.645 sys 1m36.756s 00:29:37.645 14:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:37.645 14:13:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:37.645 ************************************ 00:29:37.645 END TEST nvmf_target_core_interrupt_mode 00:29:37.645 ************************************ 00:29:37.645 14:13:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:37.645 14:13:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:37.645 14:13:16 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:37.645 14:13:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.645 ************************************ 00:29:37.645 START TEST nvmf_interrupt 00:29:37.645 ************************************ 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:37.645 * Looking for test storage... 
00:29:37.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:37.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.645 --rc genhtml_branch_coverage=1 00:29:37.645 --rc genhtml_function_coverage=1 00:29:37.645 --rc genhtml_legend=1 00:29:37.645 --rc geninfo_all_blocks=1 00:29:37.645 --rc geninfo_unexecuted_blocks=1 00:29:37.645 00:29:37.645 ' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:37.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.645 --rc genhtml_branch_coverage=1 00:29:37.645 --rc genhtml_function_coverage=1 00:29:37.645 --rc genhtml_legend=1 00:29:37.645 --rc geninfo_all_blocks=1 00:29:37.645 --rc geninfo_unexecuted_blocks=1 00:29:37.645 00:29:37.645 ' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:37.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.645 --rc genhtml_branch_coverage=1 00:29:37.645 --rc genhtml_function_coverage=1 00:29:37.645 --rc genhtml_legend=1 00:29:37.645 --rc geninfo_all_blocks=1 00:29:37.645 --rc geninfo_unexecuted_blocks=1 00:29:37.645 00:29:37.645 ' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:37.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.645 --rc genhtml_branch_coverage=1 00:29:37.645 --rc genhtml_function_coverage=1 00:29:37.645 --rc genhtml_legend=1 00:29:37.645 --rc geninfo_all_blocks=1 00:29:37.645 --rc geninfo_unexecuted_blocks=1 00:29:37.645 00:29:37.645 ' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.645 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.646 14:13:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.920 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:42.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.921 14:13:21 
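The bracketed device checks above classify each NIC by its PCI vendor:device pair before any network setup runs: the 0x8086 IDs 0x1592 and 0x159b land in the e810 list, 0x37d2 in x722, and the 0x15b3 IDs in mlx. A condensed sketch of the same classification, assuming lspci is available and using only the IDs visible in this trace:

    # Sketch only: group NICs the way nvmf/common.sh does, keyed by the
    # vendor:device pairs that appear in the trace above.
    declare -A nic_family=(
        [8086:1592]=e810 [8086:159b]=e810
        [8086:37d2]=x722
        [15b3:a2dc]=mlx [15b3:1021]=mlx [15b3:a2d6]=mlx [15b3:101d]=mlx
        [15b3:101b]=mlx [15b3:1017]=mlx [15b3:1019]=mlx [15b3:1015]=mlx
        [15b3:1013]=mlx
    )
    # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor:device>
    while read -r addr id; do
        fam=${nic_family[$id]:-}
        [[ -n $fam ]] && echo "Found $addr ($id) -> $fam"
    done < <(lspci -Dn | awk '$2 ~ /^02/ {print $1, $3}')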
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:42.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:42.921 Found net devices under 0000:31:00.0: cvl_0_0 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:42.921 Found net devices under 0000:31:00.1: cvl_0_1 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.921 14:13:21 
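Both E810 ports resolve to up net devices, so is_hw=yes and nvmf_tcp_init runs next: it moves the target-side port (cvl_0_0) into a private network namespace, addresses the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420 through iptables, and proves reachability in both directions with ping. A minimal reproduction of that topology, run as root and assuming the same interface names and addresses as this trace:

    # Sketch: the namespace topology nvmf_tcp_init builds below.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator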
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:29:42.921 00:29:42.921 --- 10.0.0.2 ping statistics --- 00:29:42.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.921 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:29:42.921 00:29:42.921 --- 10.0.0.1 ping statistics --- 00:29:42.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.921 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1146435 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1146435 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 1146435 ']' 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:42.921 14:13:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:42.922 [2024-11-06 14:13:21.961347] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:42.922 [2024-11-06 14:13:21.962324] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:29:42.922 [2024-11-06 14:13:21.962361] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.922 [2024-11-06 14:13:22.045699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:42.922 [2024-11-06 14:13:22.081554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:42.922 [2024-11-06 14:13:22.081586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.922 [2024-11-06 14:13:22.081595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.922 [2024-11-06 14:13:22.081602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.922 [2024-11-06 14:13:22.081608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.922 [2024-11-06 14:13:22.082748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.922 [2024-11-06 14:13:22.082753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.922 [2024-11-06 14:13:22.138985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.922 [2024-11-06 14:13:22.139499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:42.922 [2024-11-06 14:13:22.139845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:43.491 5000+0 records in 00:29:43.491 5000+0 records out 00:29:43.491 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00792586 s, 1.3 GB/s 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:43.491 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:43.750 AIO0 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:43.750 [2024-11-06 14:13:22.819307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.750 14:13:22 
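From the transport creation above through the listener added below, the target is configured entirely through rpc_cmd, which forwards to the application's JSON-RPC socket: an AIO bdev on the 10 MB file written by dd, then a TCP transport, a subsystem, a namespace, and a listener on 10.0.0.2:4420. The same sequence as a standalone sketch, with flags copied verbatim from the trace and assuming an SPDK checkout plus a target already listening on the default /var/tmp/spdk.sock:

    # Sketch: the RPC sequence this test issues, replayed by hand.
    RPC=./scripts/rpc.py
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000   # 10 MB backing file
    $RPC bdev_aio_create /tmp/aiofile AIO0 2048          # 2048-byte block size
    $RPC nvmf_create_transport -t tcp -o -u 8192 -q 256
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420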
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:43.750 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:43.751 [2024-11-06 14:13:22.843766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1146435 0 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1146435 0 idle 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:43.751 14:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146435 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.22 reactor_0' 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146435 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.22 reactor_0 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1146435 1 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1146435 1 idle 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:43.751 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146440 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146440 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1146718 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1146435 0 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # 
reactor_is_busy_or_idle 1146435 0 busy 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:44.011 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146435 root 20 0 128.2g 44928 32256 R 26.7 0.0 0:00.27 reactor_0' 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146435 root 20 0 128.2g 44928 32256 R 26.7 0.0 0:00.27 reactor_0 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=26.7 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=26 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:44.271 14:13:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:29:45.210 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:29:45.210 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:45.210 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:45.210 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146435 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.62 reactor_0' 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146435 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.62 reactor_0 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1146435 1 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1146435 1 busy 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146440 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.36 reactor_1' 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146440 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.36 reactor_1 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:45.470 14:13:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1146718 00:29:55.458 Initializing NVMe Controllers 00:29:55.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.458 Controller IO queue size 256, less than required. 00:29:55.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:55.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:55.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:55.458 Initialization complete. Launching workers. 
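The summary that follows is the output of the spdk_nvme_perf run launched above. Its flags: -q 256 is the per-worker queue depth, -o 4096 selects 4 KiB I/Os, -w randrw -M 30 requests a random workload with roughly a 30/70 read/write split, -t 10 runs for ten seconds, and -c 0xC pins the workers to cores 2 and 3, which is why the two "Associating ... with lcore" lines above name lcore 2 and lcore 3. The "Controller IO queue size 256, less than required" notice appears to be expected here: the transport was created with -q 256, so the controller's queues cannot absorb the full perf depth plus overhead, and some requests queue in the driver instead. A comparable standalone invocation against the same listener:

    # Sketch: the perf invocation, copied from the trace.
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'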
00:29:55.458 ======================================================== 00:29:55.458 Latency(us) 00:29:55.458 Device Information : IOPS MiB/s Average min max 00:29:55.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20479.24 80.00 12504.83 3404.98 18599.73 00:29:55.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 21540.72 84.14 11888.54 3330.58 18619.35 00:29:55.458 ======================================================== 00:29:55.458 Total : 42019.96 164.14 12188.90 3330.58 18619.35 00:29:55.458 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1146435 0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1146435 0 idle 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146435 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.22 reactor_0' 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146435 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.22 reactor_0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1146435 1 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1146435 1 idle 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146440 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146440 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:55.458 14:13:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:55.458 14:13:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:29:55.458 14:13:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:29:55.458 14:13:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:55.458 14:13:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:29:55.458 14:13:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1146435 0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1146435 0 idle 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146435 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.38 reactor_0' 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146435 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.38 reactor_0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1146435 1 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1146435 1 idle 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1146435 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
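Every reactor_is_idle / reactor_is_busy probe in this trace, including the one in progress here, follows the same recipe: capture a single batch-mode frame of top for the target PID with threads shown, pick the row whose command is reactor_<idx>, take the %CPU column, truncate it to an integer, and compare it against a threshold (65 busy / 30 idle by default; the test lowers BUSY_THRESHOLD to 30 while perf is running). A condensed sketch of one probe, assuming the PID and reactor index from this run:

    # Sketch: one reactor utilization probe, as in interrupt/common.sh.
    pid=1146435 idx=1 idle_threshold=30
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
    cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')  # %CPU column
    cpu_rate=${cpu_rate%.*}              # 99.9 -> 99, 0.0 -> 0
    if (( ${cpu_rate:-0} > idle_threshold )); then
        echo "reactor_${idx} is busy (${cpu_rate}%)"
    else
        echo "reactor_${idx} is idle (${cpu_rate}%)"
    fi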
00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1146435 -w 256 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1146440 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.06 reactor_1' 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1146440 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.06 reactor_1 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.365 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:57.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.625 rmmod nvme_tcp 00:29:57.625 rmmod nvme_fabrics 00:29:57.625 rmmod nvme_keyring 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1146435 ']' 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1146435 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 1146435 ']' 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 1146435 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1146435 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1146435' 00:29:57.625 killing process with pid 1146435 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 1146435 00:29:57.625 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 1146435 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.885 14:13:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.790 14:13:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.790 00:29:59.790 real 0m22.456s 00:29:59.790 user 0m39.471s 00:29:59.790 sys 0m7.243s 00:29:59.790 14:13:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:59.790 14:13:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:59.790 ************************************ 00:29:59.790 END TEST nvmf_interrupt 00:29:59.790 ************************************ 00:29:59.790 00:29:59.790 real 26m18.599s 00:29:59.790 user 57m13.348s 00:29:59.790 sys 7m44.823s 00:29:59.790 14:13:38 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:59.790 14:13:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.790 ************************************ 00:29:59.790 END TEST nvmf_tcp 00:29:59.790 ************************************ 00:29:59.790 14:13:39 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:29:59.790 14:13:39 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:59.790 14:13:39 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:59.790 14:13:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:59.790 14:13:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.790 ************************************ 00:29:59.790 START TEST spdkcli_nvmf_tcp 00:29:59.790 ************************************ 00:29:59.790 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:00.051 * Looking for test storage... 00:30:00.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.051 --rc genhtml_branch_coverage=1 00:30:00.051 --rc genhtml_function_coverage=1 00:30:00.051 --rc genhtml_legend=1 00:30:00.051 --rc geninfo_all_blocks=1 00:30:00.051 --rc geninfo_unexecuted_blocks=1 00:30:00.051 00:30:00.051 ' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.051 --rc genhtml_branch_coverage=1 00:30:00.051 --rc genhtml_function_coverage=1 00:30:00.051 --rc genhtml_legend=1 00:30:00.051 --rc geninfo_all_blocks=1 00:30:00.051 --rc geninfo_unexecuted_blocks=1 00:30:00.051 00:30:00.051 ' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.051 --rc genhtml_branch_coverage=1 00:30:00.051 --rc genhtml_function_coverage=1 00:30:00.051 --rc genhtml_legend=1 00:30:00.051 --rc geninfo_all_blocks=1 00:30:00.051 --rc geninfo_unexecuted_blocks=1 00:30:00.051 00:30:00.051 ' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:00.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.051 --rc genhtml_branch_coverage=1 00:30:00.051 --rc genhtml_function_coverage=1 00:30:00.051 --rc genhtml_legend=1 00:30:00.051 --rc geninfo_all_blocks=1 00:30:00.051 --rc geninfo_unexecuted_blocks=1 00:30:00.051 00:30:00.051 ' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:00.051 
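The lt 1.15 2 trace above is scripts/common.sh comparing lcov's reported version field by field: both strings are split on ., -, and :, and the numeric components are compared left to right until one side wins. A compact sketch of the same idea, assuming purely numeric fields as the original enforces with its ^[0-9]+$ test:

    # Sketch: field-wise version comparison, as in scripts/common.sh.
    version_lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"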
14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.051 14:13:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:00.051 14:13:39 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1150298 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1150298 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 1150298 ']' 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.052 14:13:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:00.052 [2024-11-06 14:13:39.204956] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
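run_nvmf_tgt launches build/bin/nvmf_tgt and then blocks in waitforlisten until pid 1150298 answers on /var/tmp/spdk.sock. A rough sketch of what that wait amounts to; the `spdk_get_version` probe is our choice of cheap RPC, not necessarily the helper's, and `$rootdir` stands for the spdk checkout as elsewhere in these scripts:

```bash
# Poll until the target process is alive *and* its JSON-RPC socket answers.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}

waitforlisten_sketch "$nvmf_tgt_pid"
```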
00:30:00.052 [2024-11-06 14:13:39.205015] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150298 ] 00:30:00.052 [2024-11-06 14:13:39.274449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:00.052 [2024-11-06 14:13:39.312299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.052 [2024-11-06 14:13:39.312315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.312 14:13:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:00.312 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:00.312 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:00.312 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:00.312 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:00.312 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:00.312 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:00.312 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:00.312 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:00.312 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:00.312 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:00.312 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:00.312 ' 00:30:02.847 [2024-11-06 14:13:41.872164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.226 [2024-11-06 14:13:43.091913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:06.135 [2024-11-06 14:13:45.394380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:08.672 [2024-11-06 14:13:47.367998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:09.611 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:09.611 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:09.611 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:09.611 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:09.611 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:09.611 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:09.611 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:09.611 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:09.611 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:09.611 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:09.611 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:09.611 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:09.611 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:09.870 14:13:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:09.870 14:13:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:09.870 14:13:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:09.870 14:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:09.870 14:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:09.870 14:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:09.870 14:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:09.870 14:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:10.129 14:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:10.129 14:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:10.389 14:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:10.389 14:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.389 14:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.389 
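check_match, traced here, is the pass/fail heart of the test: it dumps the live spdkcli tree under /nvmf and hands it to the match tool together with the expected spdkcli_nvmf.test.match file (which may contain wildcard lines). Condensed, under the assumption that the `ll` output is captured into the .test file the trace later removes:

```bash
# Dump the configured tree, compare against the golden .match file, clean up.
testdir=$rootdir/test/spdkcli
"$rootdir/scripts/spdkcli.py" ll /nvmf > "$testdir/match_files/spdkcli_nvmf.test"
"$rootdir/test/app/match/match" "$testdir/match_files/spdkcli_nvmf.test.match"
rm -f "$testdir/match_files/spdkcli_nvmf.test"
```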
14:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:10.389 14:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:10.389 14:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.389 14:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:10.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:10.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:10.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:10.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:10.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:10.389 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:10.389 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:10.389 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:10.389 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:10.389 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:10.389 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:10.389 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:10.389 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:10.389 ' 00:30:15.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:15.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:15.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:15.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:15.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:15.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:15.664 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:15.664 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:15.664 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:15.664 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:15.664 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:15.664 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:15.664 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:15.664 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:15.664 14:13:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:15.664 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.664 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.664 
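The clear pass mirrors the create pass: spdkcli_job.py takes newline-separated entries of the form `'<spdkcli command>' '<substring to expect>' [flag]` (the trailing True/False in the Executing lines suggests a verify flag, set only on creates), and teardown replays the hierarchy bottom-up: namespaces, hosts and listeners first, then subsystems, then bdevs. A trimmed invocation using three of the commands from the trace:

```bash
# Order matters: delete the subsystems before the malloc bdevs they consume.
"$rootdir/test/spdkcli/spdkcli_job.py" "'/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3' 'nqn.2014-08.org.spdk:cnode3'
'/nvmf/subsystem delete_all' 'nqn.2014-08.org.spdk:cnode2'
'/bdevs/malloc delete Malloc1' 'Malloc1'
"
```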
14:13:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1150298 00:30:15.664 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1150298 ']' 00:30:15.664 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1150298 00:30:15.664 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1150298 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1150298' 00:30:15.665 killing process with pid 1150298 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 1150298 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 1150298 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1150298 ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1150298 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1150298 ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1150298 00:30:15.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1150298) - No such process 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 1150298 is not found' 00:30:15.665 Process with pid 1150298 is not found 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:15.665 00:30:15.665 real 0m15.694s 00:30:15.665 user 0m33.453s 00:30:15.665 sys 0m0.630s 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.665 14:13:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.665 ************************************ 00:30:15.665 END TEST spdkcli_nvmf_tcp 00:30:15.665 ************************************ 00:30:15.665 14:13:54 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:15.665 14:13:54 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:15.665 14:13:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:15.665 14:13:54 -- common/autotest_common.sh@10 -- # set +x 00:30:15.665 ************************************ 00:30:15.665 START TEST nvmf_identify_passthru 00:30:15.665 ************************************ 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:15.665 * Looking for test 
storage... 00:30:15.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.665 --rc genhtml_branch_coverage=1 00:30:15.665 --rc genhtml_function_coverage=1 00:30:15.665 --rc genhtml_legend=1 00:30:15.665 --rc geninfo_all_blocks=1 00:30:15.665 --rc geninfo_unexecuted_blocks=1 00:30:15.665 00:30:15.665 ' 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.665 --rc genhtml_branch_coverage=1 00:30:15.665 --rc genhtml_function_coverage=1 00:30:15.665 --rc genhtml_legend=1 00:30:15.665 --rc geninfo_all_blocks=1 00:30:15.665 --rc geninfo_unexecuted_blocks=1 00:30:15.665 00:30:15.665 ' 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.665 --rc genhtml_branch_coverage=1 00:30:15.665 --rc genhtml_function_coverage=1 00:30:15.665 --rc genhtml_legend=1 00:30:15.665 --rc geninfo_all_blocks=1 00:30:15.665 --rc geninfo_unexecuted_blocks=1 00:30:15.665 00:30:15.665 ' 00:30:15.665 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:15.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.665 --rc genhtml_branch_coverage=1 00:30:15.665 --rc genhtml_function_coverage=1 00:30:15.665 --rc genhtml_legend=1 00:30:15.665 --rc geninfo_all_blocks=1 00:30:15.665 --rc geninfo_unexecuted_blocks=1 00:30:15.665 00:30:15.665 ' 00:30:15.665 14:13:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.665 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.665 14:13:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.665 14:13:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.665 14:13:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.665 14:13:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.665 14:13:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:15.666 14:13:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:15.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.666 14:13:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.666 14:13:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.666 14:13:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.666 14:13:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.666 14:13:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.666 14:13:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.666 14:13:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.666 14:13:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.666 14:13:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:15.666 14:13:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.666 14:13:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.666 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:15.666 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.666 14:13:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.666 14:13:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.938 14:13:59 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.938 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:20.939 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:20.939 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:20.939 Found net devices under 0000:31:00.0: cvl_0_0 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:20.939 Found net devices under 0000:31:00.1: cvl_0_1 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.939 14:13:59 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.939 14:13:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:30:20.939 00:30:20.939 --- 10.0.0.2 ping statistics --- 00:30:20.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.939 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
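The sequence above is nvmf_tcp_init building a two-namespace topology for NET_TYPE=phy: the first E810 port (cvl_0_0) becomes the target and is moved into its own network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe/TCP traffic. The same steps, collected from the trace:

```bash
# Target NIC lives in cvl_0_0_ns_spdk (10.0.0.2); the initiator stays in the
# root namespace on cvl_0_1 (10.0.0.1). Commands as traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```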
00:30:20.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:30:20.939 00:30:20.939 --- 10.0.0.1 ping statistics --- 00:30:20.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.939 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:20.939 14:14:00 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:20.939 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:20.939 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:20.939 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:21.199 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:21.199 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:30:21.199 14:14:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:30:21.199 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:21.199 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:21.199 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:21.199 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:21.199 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:21.458 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605499 00:30:21.458 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:21.458 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:21.458 14:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1157703 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1157703 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 1157703 ']' 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:22.026 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.026 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:22.026 [2024-11-06 14:14:01.209488] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:30:22.026 [2024-11-06 14:14:01.209540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.026 [2024-11-06 14:14:01.279598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.026 [2024-11-06 14:14:01.309938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.026 [2024-11-06 14:14:01.309969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
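For reference, the get_first_nvme_bdf walk traced just before the target launch reduces to: enumerate the local NVMe controllers with gen_nvme.sh, pull each traddr out of the generated JSON with jq, and take the first. A sketch with an illustrative helper name:

```bash
# Picks the PCI address of the first local NVMe controller; in this run it
# resolves to 0000:65:00.0, which the test then attaches as bdev Nvme0n1.
first_nvme_bdf() {
    local -a bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1
    printf '%s\n' "${bdfs[0]}"
}

bdf=$(first_nvme_bdf)
```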
00:30:22.026 [2024-11-06 14:14:01.309974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.026 [2024-11-06 14:14:01.309979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.026 [2024-11-06 14:14:01.309983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.286 [2024-11-06 14:14:01.311322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.286 [2024-11-06 14:14:01.311650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.286 [2024-11-06 14:14:01.311773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.286 [2024-11-06 14:14:01.311774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:30:22.286 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.286 INFO: Log level set to 20 00:30:22.286 INFO: Requests: 00:30:22.286 { 00:30:22.286 "jsonrpc": "2.0", 00:30:22.286 "method": "nvmf_set_config", 00:30:22.286 "id": 1, 00:30:22.286 "params": { 00:30:22.286 "admin_cmd_passthru": { 00:30:22.286 "identify_ctrlr": true 00:30:22.286 } 00:30:22.286 } 00:30:22.286 } 00:30:22.286 00:30:22.286 INFO: response: 00:30:22.286 { 00:30:22.286 "jsonrpc": "2.0", 00:30:22.286 "id": 1, 00:30:22.286 "result": true 00:30:22.286 } 00:30:22.286 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.286 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.286 INFO: Setting log level to 20 00:30:22.286 INFO: Setting log level to 20 00:30:22.286 INFO: Log level set to 20 00:30:22.286 INFO: Log level set to 20 00:30:22.286 INFO: Requests: 00:30:22.286 { 00:30:22.286 "jsonrpc": "2.0", 00:30:22.286 "method": "framework_start_init", 00:30:22.286 "id": 1 00:30:22.286 } 00:30:22.286 00:30:22.286 INFO: Requests: 00:30:22.286 { 00:30:22.286 "jsonrpc": "2.0", 00:30:22.286 "method": "framework_start_init", 00:30:22.286 "id": 1 00:30:22.286 } 00:30:22.286 00:30:22.286 [2024-11-06 14:14:01.399335] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:22.286 INFO: response: 00:30:22.286 { 00:30:22.286 "jsonrpc": "2.0", 00:30:22.286 "id": 1, 00:30:22.286 "result": true 00:30:22.286 } 00:30:22.286 00:30:22.286 INFO: response: 00:30:22.286 { 00:30:22.286 "jsonrpc": "2.0", 00:30:22.286 "id": 1, 00:30:22.286 "result": true 00:30:22.286 } 00:30:22.286 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.286 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.286 14:14:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:22.286 INFO: Setting log level to 40 00:30:22.286 INFO: Setting log level to 40 00:30:22.286 INFO: Setting log level to 40 00:30:22.286 [2024-11-06 14:14:01.408344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.286 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.286 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.286 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.545 Nvme0n1 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.545 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.545 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.545 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.545 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.546 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.546 [2024-11-06 14:14:01.769563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.546 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.546 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:22.546 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.546 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:22.546 [ 00:30:22.546 { 00:30:22.546 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:22.546 "subtype": "Discovery", 00:30:22.546 "listen_addresses": [], 00:30:22.546 "allow_any_host": true, 00:30:22.546 "hosts": [] 00:30:22.546 }, 00:30:22.546 { 00:30:22.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.546 "subtype": "NVMe", 00:30:22.546 "listen_addresses": [ 00:30:22.546 { 00:30:22.546 "trtype": "TCP", 00:30:22.546 "adrfam": "IPv4", 00:30:22.546 "traddr": "10.0.0.2", 00:30:22.546 "trsvcid": "4420" 00:30:22.546 } 00:30:22.546 ], 00:30:22.546 "allow_any_host": true, 00:30:22.546 "hosts": [], 00:30:22.546 "serial_number": 
"SPDK00000000000001", 00:30:22.546 "model_number": "SPDK bdev Controller", 00:30:22.546 "max_namespaces": 1, 00:30:22.546 "min_cntlid": 1, 00:30:22.546 "max_cntlid": 65519, 00:30:22.546 "namespaces": [ 00:30:22.546 { 00:30:22.546 "nsid": 1, 00:30:22.546 "bdev_name": "Nvme0n1", 00:30:22.546 "name": "Nvme0n1", 00:30:22.546 "nguid": "363447305260549900253845000000A3", 00:30:22.546 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:30:22.546 } 00:30:22.546 ] 00:30:22.546 } 00:30:22.546 ] 00:30:22.546 14:14:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.546 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:22.546 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:22.546 14:14:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:22.805 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:30:22.805 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:22.805 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:22.805 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:23.065 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:23.065 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.065 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:23.065 14:14:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.065 rmmod nvme_tcp 00:30:23.065 rmmod nvme_fabrics 00:30:23.065 rmmod nvme_keyring 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1157703 ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1157703 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 1157703 ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 1157703 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1157703 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1157703' 00:30:23.065 killing process with pid 1157703 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 1157703 00:30:23.065 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 1157703 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.324 14:14:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.324 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:23.324 14:14:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.861 14:14:04 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.861 00:30:25.861 real 0m9.833s 00:30:25.861 user 0m6.237s 00:30:25.861 sys 0m4.766s 00:30:25.861 14:14:04 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:25.861 14:14:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.861 ************************************ 00:30:25.861 END TEST nvmf_identify_passthru 00:30:25.861 ************************************ 00:30:25.861 14:14:04 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:25.861 14:14:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:25.861 14:14:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:25.861 14:14:04 -- common/autotest_common.sh@10 -- # set +x 00:30:25.861 ************************************ 00:30:25.861 START TEST nvmf_dif 00:30:25.861 ************************************ 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:25.861 * Looking for test storage... 
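For reference, the nvmf_identify_passthru run that ends above drives a fixed RPC sequence against the target. A minimal standalone sketch of that sequence using scripts/rpc.py follows; the trace issues the same calls through autotest's rpc_cmd wrapper, so every flag below is taken verbatim from the trace and only the rpc.py spelling is an assumption:

    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Identify over the fabric: with passthru enabled, the physical drive's
    # serial and model (S64GNE0R605499 / SAMSUNG) surface through the
    # NVMe-oF controller instead of the subsystem's own values.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The pass condition is the pair of '!=' string comparisons visible above: the serial and model numbers read through the TCP listener must match the ones reported by the PCIe device itself.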
00:30:25.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.861 14:14:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.861 14:14:04 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:25.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.861 --rc genhtml_branch_coverage=1 00:30:25.861 --rc genhtml_function_coverage=1 00:30:25.861 --rc genhtml_legend=1 00:30:25.861 --rc geninfo_all_blocks=1 00:30:25.861 --rc geninfo_unexecuted_blocks=1 00:30:25.861 00:30:25.861 ' 00:30:25.862 14:14:04 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:25.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.862 --rc genhtml_branch_coverage=1 00:30:25.862 --rc genhtml_function_coverage=1 00:30:25.862 --rc genhtml_legend=1 00:30:25.862 --rc geninfo_all_blocks=1 00:30:25.862 --rc geninfo_unexecuted_blocks=1 00:30:25.862 00:30:25.862 ' 00:30:25.862 14:14:04 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:30:25.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.862 --rc genhtml_branch_coverage=1 00:30:25.862 --rc genhtml_function_coverage=1 00:30:25.862 --rc genhtml_legend=1 00:30:25.862 --rc geninfo_all_blocks=1 00:30:25.862 --rc geninfo_unexecuted_blocks=1 00:30:25.862 00:30:25.862 ' 00:30:25.862 14:14:04 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:25.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.862 --rc genhtml_branch_coverage=1 00:30:25.862 --rc genhtml_function_coverage=1 00:30:25.862 --rc genhtml_legend=1 00:30:25.862 --rc geninfo_all_blocks=1 00:30:25.862 --rc geninfo_unexecuted_blocks=1 00:30:25.862 00:30:25.862 ' 00:30:25.862 14:14:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.862 14:14:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.862 14:14:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.862 14:14:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.862 14:14:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.862 14:14:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.862 14:14:04 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.862 14:14:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.862 14:14:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:25.862 14:14:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:25.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.862 14:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:25.862 14:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:25.862 14:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:25.862 14:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:25.862 14:14:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.862 14:14:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:25.862 14:14:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:25.862 14:14:04 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:30:25.862 14:14:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.143 14:14:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:31.144 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.144 
14:14:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:31.144 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:31.144 Found net devices under 0000:31:00.0: cvl_0_0 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:31.144 Found net devices under 0000:31:00.1: cvl_0_1 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.144 14:14:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:30:31.144 00:30:31.144 --- 10.0.0.2 ping statistics --- 00:30:31.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.144 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:30:31.144 00:30:31.144 --- 10.0.0.1 ping statistics --- 00:30:31.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.144 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:31.144 14:14:10 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:33.684 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:33.684 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:33.684 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:33.685 14:14:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:33.685 14:14:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1163677 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1163677 00:30:33.685 14:14:12 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 1163677 ']' 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:33.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:33.685 14:14:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:33.685 [2024-11-06 14:14:12.574955] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:30:33.685 [2024-11-06 14:14:12.575017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.685 [2024-11-06 14:14:12.666888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.685 [2024-11-06 14:14:12.720166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.685 [2024-11-06 14:14:12.720217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.685 [2024-11-06 14:14:12.720227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.685 [2024-11-06 14:14:12.720234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.685 [2024-11-06 14:14:12.720241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.685 [2024-11-06 14:14:12.721042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.254 14:14:13 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:34.254 14:14:13 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:30:34.254 14:14:13 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:34.254 14:14:13 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.254 14:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:34.254 14:14:13 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.254 14:14:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:34.254 14:14:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:34.255 14:14:13 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.255 14:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:34.255 [2024-11-06 14:14:13.385331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.255 14:14:13 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.255 14:14:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:34.255 14:14:13 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:34.255 14:14:13 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:34.255 14:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:34.255 ************************************ 00:30:34.255 START TEST fio_dif_1_default 00:30:34.255 ************************************ 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.255 bdev_null0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.255 [2024-11-06 14:14:13.441620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:34.255 { 00:30:34.255 "params": { 00:30:34.255 "name": "Nvme$subsystem", 00:30:34.255 "trtype": "$TEST_TRANSPORT", 00:30:34.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.255 "adrfam": "ipv4", 00:30:34.255 "trsvcid": "$NVMF_PORT", 00:30:34.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.255 "hdgst": ${hdgst:-false}, 00:30:34.255 "ddgst": ${ddgst:-false} 00:30:34.255 }, 00:30:34.255 "method": "bdev_nvme_attach_controller" 00:30:34.255 } 00:30:34.255 EOF 00:30:34.255 )") 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
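The fio_bdev call above hands fio two file descriptors: /dev/fd/62 carries the SPDK attach config assembled by gen_nvmf_target_json (its bdev_nvme_attach_controller fragment is printed in full just below), and /dev/fd/61 carries the job file from gen_fio_conf, with LD_PRELOAD injecting SPDK's external ioengine. A rough standalone equivalent, using ordinary files in place of the harness's pipes; the engine path and fio flags are exactly as captured, while the job-file contents are an assumption matching the 4096B randread, iodepth=4 parameters fio reports below:

    # Assumed job file; fio's banner below confirms rw=randread, bs=4096B,
    # iodepth=4, and a job named filename0 driving the attached bdev.
    printf '[filename0]\nfilename=Nvme0n1\nrw=randread\nbs=4k\niodepth=4\nthread=1\n' > job.fio
    # tgt.json stands in for the JSON that gen_nvmf_target_json streams in
    # over /dev/fd/62 (the fragment printed below, wrapped in a full
    # bdev-subsystem config).
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
        62< tgt.json 61< job.fio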
00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:34.255 "params": { 00:30:34.255 "name": "Nvme0", 00:30:34.255 "trtype": "tcp", 00:30:34.255 "traddr": "10.0.0.2", 00:30:34.255 "adrfam": "ipv4", 00:30:34.255 "trsvcid": "4420", 00:30:34.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.255 "hdgst": false, 00:30:34.255 "ddgst": false 00:30:34.255 }, 00:30:34.255 "method": "bdev_nvme_attach_controller" 00:30:34.255 }' 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:34.255 14:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.823 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:34.823 fio-3.35 00:30:34.823 Starting 1 thread 00:30:47.046 00:30:47.046 filename0: (groupid=0, jobs=1): err= 0: pid=1164278: Wed Nov 6 14:14:24 2024 00:30:47.046 read: IOPS=200, BW=800KiB/s (820kB/s)(8032KiB/10036msec) 00:30:47.046 slat (nsec): min=4583, max=33306, avg=5939.60, stdev=1312.63 00:30:47.046 clat (usec): min=466, max=46524, avg=19974.93, stdev=20152.93 00:30:47.046 lat (usec): min=471, max=46550, avg=19980.87, stdev=20152.76 00:30:47.046 clat percentiles (usec): 00:30:47.046 | 1.00th=[ 603], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 832], 00:30:47.046 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[ 963], 60.00th=[41157], 00:30:47.046 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:47.046 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:30:47.046 | 99.99th=[46400] 00:30:47.046 bw ( KiB/s): min= 704, max= 1504, per=100.00%, avg=801.60, stdev=165.94, samples=20 00:30:47.046 iops : min= 176, max= 376, avg=200.40, stdev=41.49, samples=20 00:30:47.046 lat (usec) : 500=0.20%, 750=1.59%, 1000=50.00% 00:30:47.046 lat (msec) : 2=0.80%, 50=47.41% 00:30:47.046 cpu : usr=93.20%, sys=6.60%, ctx=12, majf=0, minf=248 00:30:47.046 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.046 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.046 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:30:47.046 00:30:47.046 Run status group 0 (all jobs): 00:30:47.046 READ: bw=800KiB/s (820kB/s), 800KiB/s-800KiB/s (820kB/s-820kB/s), io=8032KiB (8225kB), run=10036-10036msec 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 00:30:47.046 real 0m11.219s 00:30:47.046 user 0m23.901s 00:30:47.046 sys 0m0.985s 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 ************************************ 00:30:47.046 END TEST fio_dif_1_default 00:30:47.046 ************************************ 00:30:47.046 14:14:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:47.046 14:14:24 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:47.046 14:14:24 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 ************************************ 00:30:47.046 START TEST fio_dif_1_multi_subsystems 00:30:47.046 ************************************ 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 bdev_null0 00:30:47.046 14:14:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 [2024-11-06 14:14:24.708058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 bdev_null1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:47.046 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:47.046 { 00:30:47.046 "params": { 00:30:47.046 "name": "Nvme$subsystem", 00:30:47.046 "trtype": "$TEST_TRANSPORT", 00:30:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.046 "adrfam": "ipv4", 00:30:47.046 "trsvcid": "$NVMF_PORT", 00:30:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.046 "hdgst": ${hdgst:-false}, 00:30:47.046 "ddgst": ${ddgst:-false} 00:30:47.047 }, 00:30:47.047 "method": "bdev_nvme_attach_controller" 00:30:47.047 } 00:30:47.047 EOF 00:30:47.047 )") 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.047 
14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:47.047 { 00:30:47.047 "params": { 00:30:47.047 "name": "Nvme$subsystem", 00:30:47.047 "trtype": "$TEST_TRANSPORT", 00:30:47.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.047 "adrfam": "ipv4", 00:30:47.047 "trsvcid": "$NVMF_PORT", 00:30:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.047 "hdgst": ${hdgst:-false}, 00:30:47.047 "ddgst": ${ddgst:-false} 00:30:47.047 }, 00:30:47.047 "method": "bdev_nvme_attach_controller" 00:30:47.047 } 00:30:47.047 EOF 00:30:47.047 )") 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:47.047 "params": { 00:30:47.047 "name": "Nvme0", 00:30:47.047 "trtype": "tcp", 00:30:47.047 "traddr": "10.0.0.2", 00:30:47.047 "adrfam": "ipv4", 00:30:47.047 "trsvcid": "4420", 00:30:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:47.047 "hdgst": false, 00:30:47.047 "ddgst": false 00:30:47.047 }, 00:30:47.047 "method": "bdev_nvme_attach_controller" 00:30:47.047 },{ 00:30:47.047 "params": { 00:30:47.047 "name": "Nvme1", 00:30:47.047 "trtype": "tcp", 00:30:47.047 "traddr": "10.0.0.2", 00:30:47.047 "adrfam": "ipv4", 00:30:47.047 "trsvcid": "4420", 00:30:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:47.047 "hdgst": false, 00:30:47.047 "ddgst": false 00:30:47.047 }, 00:30:47.047 "method": "bdev_nvme_attach_controller" 00:30:47.047 }' 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:47.047 14:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.047 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:47.047 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:47.047 fio-3.35 00:30:47.047 Starting 2 threads 00:30:57.029 00:30:57.029 filename0: (groupid=0, jobs=1): err= 0: pid=1167036: Wed Nov 6 14:14:35 2024 00:30:57.029 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10033msec) 00:30:57.029 slat (nsec): min=2852, max=14324, avg=5713.07, stdev=510.84 00:30:57.029 clat (usec): min=40831, max=45141, avg=41100.14, stdev=396.18 00:30:57.029 lat (usec): min=40837, max=45151, avg=41105.86, stdev=396.15 00:30:57.029 clat percentiles (usec): 00:30:57.029 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:57.029 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:57.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:30:57.029 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:30:57.029 | 99.99th=[45351] 00:30:57.029 bw ( KiB/s): min= 352, max= 416, per=33.82%, avg=388.80, stdev=15.66, samples=20 00:30:57.029 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:30:57.029 lat (msec) : 50=100.00% 00:30:57.029 cpu : usr=95.55%, sys=4.25%, ctx=9, majf=0, minf=93 00:30:57.029 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.029 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.029 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:57.029 filename1: (groupid=0, jobs=1): err= 0: pid=1167037: Wed Nov 6 14:14:35 2024 00:30:57.029 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:30:57.029 slat (nsec): min=4033, max=20690, avg=5720.04, stdev=636.65 00:30:57.029 clat (usec): min=506, max=43352, avg=21077.68, stdev=20148.37 00:30:57.029 lat (usec): min=512, max=43372, avg=21083.40, stdev=20148.35 00:30:57.029 clat percentiles (usec): 00:30:57.029 | 1.00th=[ 611], 5.00th=[ 807], 10.00th=[ 832], 20.00th=[ 848], 00:30:57.029 | 30.00th=[ 865], 40.00th=[ 881], 50.00th=[40633], 60.00th=[41157], 00:30:57.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:57.029 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:30:57.029 | 99.99th=[43254] 00:30:57.029 bw ( KiB/s): min= 673, max= 768, per=66.24%, avg=760.05, stdev=24.98, samples=20 00:30:57.029 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:30:57.029 lat (usec) : 750=2.10%, 1000=46.48% 00:30:57.029 lat (msec) : 2=1.21%, 50=50.21% 00:30:57.029 cpu : usr=95.34%, sys=4.47%, ctx=7, majf=0, minf=155 00:30:57.029 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.029 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.029 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.029 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:57.029 00:30:57.029 Run status group 0 (all jobs): 00:30:57.029 READ: bw=1147KiB/s (1175kB/s), 389KiB/s-758KiB/s (398kB/s-777kB/s), io=11.2MiB (11.8MB), run=10033-10041msec 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.029 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.030 00:30:57.030 real 0m11.428s 00:30:57.030 user 0m32.299s 00:30:57.030 sys 0m1.195s 00:30:57.030 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:57.030 14:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 ************************************ 00:30:57.030 END TEST fio_dif_1_multi_subsystems 00:30:57.030 ************************************ 00:30:57.030 14:14:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:30:57.030 14:14:36 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:57.030 14:14:36 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:57.030 14:14:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 ************************************ 00:30:57.030 START TEST fio_dif_rand_params 00:30:57.030 ************************************ 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 bdev_null0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.030 [2024-11-06 14:14:36.187964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.030 14:14:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:57.030 { 00:30:57.030 "params": { 00:30:57.030 "name": "Nvme$subsystem", 00:30:57.030 "trtype": "$TEST_TRANSPORT", 00:30:57.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.030 "adrfam": "ipv4", 00:30:57.030 "trsvcid": "$NVMF_PORT", 00:30:57.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.030 "hdgst": ${hdgst:-false}, 00:30:57.030 "ddgst": ${ddgst:-false} 00:30:57.030 }, 00:30:57.030 "method": "bdev_nvme_attach_controller" 00:30:57.030 } 00:30:57.030 EOF 00:30:57.030 )") 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
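The trace above shows how gen_nvmf_target_json (nvmf/common.sh) assembles the JSON that fio reads from /dev/fd/62: each pass of the subsystem loop appends one bdev_nvme_attach_controller fragment to the config array via a here-document, and the fragments are then comma-joined (the IFS=, step) and printed as a single document. A minimal standalone sketch of that pattern, with this run's transport/address values hard-coded and the digest flags fixed at false; the outer "subsystems"/"bdev" wrapper is not visible in the trace and is assumed from SPDK's --spdk_json_conf format:

config=()
for subsystem in 0 1; do
# one attach-controller entry per NVMe-oF subsystem under test
config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# comma-join the fragments and wrap them for --spdk_json_conf; the join
# matches the traced IFS=,/printf step, the wrapper shape is an assumption
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}")

Building the array one heredoc fragment at a time is what lets the same loop serve one, two, or three subsystems in the tests that follow.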
00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:57.030 "params": { 00:30:57.030 "name": "Nvme0", 00:30:57.030 "trtype": "tcp", 00:30:57.030 "traddr": "10.0.0.2", 00:30:57.030 "adrfam": "ipv4", 00:30:57.030 "trsvcid": "4420", 00:30:57.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.030 "hdgst": false, 00:30:57.030 "ddgst": false 00:30:57.030 }, 00:30:57.030 "method": "bdev_nvme_attach_controller" 00:30:57.030 }' 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:57.030 14:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.290 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:57.290 ... 
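The fio_bdev/fio_plugin wrappers traced above (autotest_common.sh@1339-1354) exist to keep sanitizer builds working: they walk the ldd output of the fio plugin looking for libasan and libclang_rt.asan and, if found, LD_PRELOAD that runtime ahead of the plugin. Here both greps come up empty, so asan_lib stays empty and the command reduces to the plain invocation shown. A sketch of the equivalent standalone run, with this run's paths; the job file arriving on /dev/fd/61 comes from gen_fio_conf and its exact text is not in the trace, so the commented job section below is an assumption reconstructed from the fio banner and the dif.sh@103 parameters:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# pick up the ASan runtime if the plugin links one (empty string otherwise)
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 \
    /dev/fd/61
# where the generated job file looks roughly like:
#   [filename0]
#   filename=Nvme0n1   # bdev exposed by the Nvme0 attach-controller entry (assumed name)
#   rw=randread
#   bs=128k
#   iodepth=3
#   numjobs=3

Passing both the JSON config and the job file over /dev/fd pipes keeps the whole setup in-memory, so no temporary files are left behind if a test is killed.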
00:30:57.290 fio-3.35 00:30:57.290 Starting 3 threads 00:31:03.870 00:31:03.870 filename0: (groupid=0, jobs=1): err= 0: pid=1169552: Wed Nov 6 14:14:42 2024 00:31:03.870 read: IOPS=330, BW=41.3MiB/s (43.4MB/s)(209MiB/5046msec) 00:31:03.870 slat (nsec): min=4208, max=21064, avg=6192.97, stdev=700.55 00:31:03.870 clat (usec): min=4335, max=89108, avg=9037.72, stdev=6053.39 00:31:03.870 lat (usec): min=4341, max=89114, avg=9043.91, stdev=6053.40 00:31:03.870 clat percentiles (usec): 00:31:03.870 | 1.00th=[ 5014], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6849], 00:31:03.870 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8848], 00:31:03.870 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:31:03.870 | 99.00th=[46400], 99.50th=[47973], 99.90th=[87557], 99.95th=[88605], 00:31:03.870 | 99.99th=[88605] 00:31:03.870 bw ( KiB/s): min=26112, max=49152, per=35.93%, avg=42649.60, stdev=6794.81, samples=10 00:31:03.870 iops : min= 204, max= 384, avg=333.20, stdev=53.08, samples=10 00:31:03.870 lat (msec) : 10=78.85%, 20=19.65%, 50=1.26%, 100=0.24% 00:31:03.870 cpu : usr=96.31%, sys=3.47%, ctx=7, majf=0, minf=47 00:31:03.870 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.870 issued rwts: total=1669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:03.870 filename0: (groupid=0, jobs=1): err= 0: pid=1169553: Wed Nov 6 14:14:42 2024 00:31:03.870 read: IOPS=226, BW=28.4MiB/s (29.7MB/s)(143MiB/5045msec) 00:31:03.870 slat (nsec): min=4071, max=32109, avg=6421.48, stdev=1630.62 00:31:03.870 clat (usec): min=4041, max=91952, avg=13174.57, stdev=14484.90 00:31:03.870 lat (usec): min=4047, max=91958, avg=13180.99, stdev=14484.84 00:31:03.870 clat percentiles (usec): 00:31:03.870 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7177], 00:31:03.870 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:31:03.870 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[47449], 95.00th=[49021], 00:31:03.870 | 99.00th=[51119], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:31:03.870 | 99.99th=[91751] 00:31:03.870 bw ( KiB/s): min=19712, max=41811, per=24.64%, avg=29243.50, stdev=7739.19, samples=10 00:31:03.870 iops : min= 154, max= 326, avg=228.40, stdev=60.35, samples=10 00:31:03.870 lat (msec) : 10=77.90%, 20=10.74%, 50=8.47%, 100=2.88% 00:31:03.870 cpu : usr=96.61%, sys=3.17%, ctx=8, majf=0, minf=130 00:31:03.870 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.870 issued rwts: total=1145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:03.870 filename0: (groupid=0, jobs=1): err= 0: pid=1169554: Wed Nov 6 14:14:42 2024 00:31:03.870 read: IOPS=369, BW=46.2MiB/s (48.5MB/s)(233MiB/5045msec) 00:31:03.870 slat (nsec): min=4522, max=29450, avg=6163.37, stdev=1245.81 00:31:03.870 clat (usec): min=3835, max=87279, avg=8084.48, stdev=7270.44 00:31:03.870 lat (usec): min=3841, max=87287, avg=8090.64, stdev=7270.48 00:31:03.870 clat percentiles (usec): 00:31:03.870 | 1.00th=[ 4424], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 5735], 
00:31:03.870 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7111], 00:31:03.871 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9896], 00:31:03.871 | 99.00th=[47449], 99.50th=[47973], 99.90th=[49021], 99.95th=[87557], 00:31:03.871 | 99.99th=[87557] 00:31:03.871 bw ( KiB/s): min=31232, max=60416, per=40.18%, avg=47692.80, stdev=9704.81, samples=10 00:31:03.871 iops : min= 244, max= 472, avg=372.60, stdev=75.82, samples=10 00:31:03.871 lat (msec) : 4=0.05%, 10=95.50%, 20=1.34%, 50=3.06%, 100=0.05% 00:31:03.871 cpu : usr=96.41%, sys=3.37%, ctx=7, majf=0, minf=118 00:31:03.871 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.871 issued rwts: total=1865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:03.871 00:31:03.871 Run status group 0 (all jobs): 00:31:03.871 READ: bw=116MiB/s (122MB/s), 28.4MiB/s-46.2MiB/s (29.7MB/s-48.5MB/s), io=585MiB (613MB), run=5045-5046msec 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 bdev_null0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 [2024-11-06 14:14:42.342759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 bdev_null1 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 bdev_null2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:03.871 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:03.871 { 00:31:03.871 "params": { 00:31:03.871 "name": "Nvme$subsystem", 00:31:03.871 "trtype": "$TEST_TRANSPORT", 00:31:03.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.872 "adrfam": "ipv4", 00:31:03.872 "trsvcid": "$NVMF_PORT", 00:31:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.872 "hdgst": ${hdgst:-false}, 00:31:03.872 "ddgst": ${ddgst:-false} 00:31:03.872 }, 00:31:03.872 "method": "bdev_nvme_attach_controller" 00:31:03.872 } 00:31:03.872 EOF 00:31:03.872 )") 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:03.872 { 00:31:03.872 "params": { 00:31:03.872 "name": "Nvme$subsystem", 00:31:03.872 "trtype": "$TEST_TRANSPORT", 00:31:03.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.872 "adrfam": "ipv4", 00:31:03.872 "trsvcid": "$NVMF_PORT", 00:31:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.872 "hdgst": ${hdgst:-false}, 00:31:03.872 "ddgst": ${ddgst:-false} 00:31:03.872 }, 00:31:03.872 "method": "bdev_nvme_attach_controller" 00:31:03.872 } 00:31:03.872 EOF 00:31:03.872 )") 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.872 14:14:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:03.872 { 00:31:03.872 "params": { 00:31:03.872 "name": "Nvme$subsystem", 00:31:03.872 "trtype": "$TEST_TRANSPORT", 00:31:03.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.872 "adrfam": "ipv4", 00:31:03.872 "trsvcid": "$NVMF_PORT", 00:31:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.872 "hdgst": ${hdgst:-false}, 00:31:03.872 "ddgst": ${ddgst:-false} 00:31:03.872 }, 00:31:03.872 "method": "bdev_nvme_attach_controller" 00:31:03.872 } 00:31:03.872 EOF 00:31:03.872 )") 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:03.872 "params": { 00:31:03.872 "name": "Nvme0", 00:31:03.872 "trtype": "tcp", 00:31:03.872 "traddr": "10.0.0.2", 00:31:03.872 "adrfam": "ipv4", 00:31:03.872 "trsvcid": "4420", 00:31:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.872 "hdgst": false, 00:31:03.872 "ddgst": false 00:31:03.872 }, 00:31:03.872 "method": "bdev_nvme_attach_controller" 00:31:03.872 },{ 00:31:03.872 "params": { 00:31:03.872 "name": "Nvme1", 00:31:03.872 "trtype": "tcp", 00:31:03.872 "traddr": "10.0.0.2", 00:31:03.872 "adrfam": "ipv4", 00:31:03.872 "trsvcid": "4420", 00:31:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:03.872 "hdgst": false, 00:31:03.872 "ddgst": false 00:31:03.872 }, 00:31:03.872 "method": "bdev_nvme_attach_controller" 00:31:03.872 },{ 00:31:03.872 "params": { 00:31:03.872 "name": "Nvme2", 00:31:03.872 "trtype": "tcp", 00:31:03.872 "traddr": "10.0.0.2", 00:31:03.872 "adrfam": "ipv4", 00:31:03.872 "trsvcid": "4420", 00:31:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:03.872 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:03.872 "hdgst": false, 00:31:03.872 "ddgst": false 00:31:03.872 }, 00:31:03.872 "method": "bdev_nvme_attach_controller" 00:31:03.872 }' 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:03.872 
14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.872 14:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.872 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:03.872 ... 00:31:03.872 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:03.872 ... 00:31:03.872 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:03.872 ... 00:31:03.872 fio-3.35 00:31:03.872 Starting 24 threads 00:31:16.261 00:31:16.261 filename0: (groupid=0, jobs=1): err= 0: pid=1171066: Wed Nov 6 14:14:53 2024 00:31:16.261 read: IOPS=679, BW=2718KiB/s (2783kB/s)(26.5MiB/10002msec) 00:31:16.261 slat (nsec): min=5639, max=71919, avg=9106.00, stdev=7770.97 00:31:16.262 clat (usec): min=10626, max=40758, avg=23473.48, stdev=2617.16 00:31:16.262 lat (usec): min=10632, max=40782, avg=23482.58, stdev=2617.50 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[13304], 5.00th=[16712], 10.00th=[22938], 20.00th=[23462], 00:31:16.262 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.262 | 99.00th=[31589], 99.50th=[32375], 99.90th=[40633], 99.95th=[40633], 00:31:16.262 | 99.99th=[40633] 00:31:16.262 bw ( KiB/s): min= 2554, max= 3001, per=4.23%, avg=2718.05, stdev=117.88, samples=19 00:31:16.262 iops : min= 638, max= 750, avg=679.37, stdev=29.45, samples=19 00:31:16.262 lat (msec) : 20=7.56%, 50=92.44% 00:31:16.262 cpu : usr=98.87%, sys=0.79%, ctx=71, majf=0, minf=59 00:31:16.262 IO depths : 1=5.4%, 2=10.9%, 4=22.9%, 8=53.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171067: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=662, BW=2651KiB/s (2714kB/s)(25.9MiB/10005msec) 00:31:16.262 slat (nsec): min=4154, max=69591, avg=19411.31, stdev=12288.27 00:31:16.262 clat (usec): min=13474, max=45868, avg=23967.36, stdev=1985.57 00:31:16.262 lat (usec): min=13480, max=45880, avg=23986.77, stdev=1985.77 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[17433], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:31:16.262 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:31:16.262 | 99.00th=[31851], 99.50th=[33817], 99.90th=[45876], 99.95th=[45876], 00:31:16.262 | 99.99th=[45876] 00:31:16.262 bw ( KiB/s): min= 2427, max= 2784, per=4.12%, avg=2649.53, stdev=92.46, samples=19 00:31:16.262 iops : min= 606, max= 696, avg=662.32, stdev=23.19, samples=19 00:31:16.262 lat (msec) : 20=1.75%, 50=98.25% 00:31:16.262 cpu : usr=99.01%, sys=0.74%, ctx=14, majf=0, minf=71 00:31:16.262 IO depths : 1=5.5%, 2=11.2%, 4=23.5%, 8=52.8%, 16=7.0%, 
32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171068: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=680, BW=2723KiB/s (2789kB/s)(26.6MiB/10004msec) 00:31:16.262 slat (nsec): min=4307, max=74641, avg=14984.68, stdev=12175.51 00:31:16.262 clat (usec): min=5710, max=45148, avg=23407.69, stdev=4054.45 00:31:16.262 lat (usec): min=5716, max=45159, avg=23422.67, stdev=4055.09 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[13173], 5.00th=[15795], 10.00th=[18220], 20.00th=[21890], 00:31:16.262 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26608], 95.00th=[30016], 00:31:16.262 | 99.00th=[36963], 99.50th=[40109], 99.90th=[45351], 99.95th=[45351], 00:31:16.262 | 99.99th=[45351] 00:31:16.262 bw ( KiB/s): min= 2448, max= 2976, per=4.23%, avg=2718.95, stdev=111.22, samples=19 00:31:16.262 iops : min= 612, max= 744, avg=679.68, stdev=27.83, samples=19 00:31:16.262 lat (msec) : 10=0.21%, 20=15.64%, 50=84.16% 00:31:16.262 cpu : usr=99.10%, sys=0.63%, ctx=23, majf=0, minf=55 00:31:16.262 IO depths : 1=1.0%, 2=2.8%, 4=10.0%, 8=72.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=90.8%, 8=5.6%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171069: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=679, BW=2717KiB/s (2782kB/s)(26.6MiB/10014msec) 00:31:16.262 slat (nsec): min=2857, max=64495, avg=12213.71, stdev=9960.82 00:31:16.262 clat (usec): min=10587, max=53057, avg=23486.46, stdev=4485.53 00:31:16.262 lat (usec): min=10594, max=53066, avg=23498.68, stdev=4486.14 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[13304], 5.00th=[15926], 10.00th=[17957], 20.00th=[20317], 00:31:16.262 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.262 | 70.00th=[24249], 80.00th=[24773], 90.00th=[28443], 95.00th=[31065], 00:31:16.262 | 99.00th=[39060], 99.50th=[41681], 99.90th=[53216], 99.95th=[53216], 00:31:16.262 | 99.99th=[53216] 00:31:16.262 bw ( KiB/s): min= 2436, max= 2928, per=4.23%, avg=2719.16, stdev=120.81, samples=19 00:31:16.262 iops : min= 609, max= 732, avg=679.74, stdev=30.25, samples=19 00:31:16.262 lat (msec) : 20=18.53%, 50=81.24%, 100=0.24% 00:31:16.262 cpu : usr=98.90%, sys=0.83%, ctx=68, majf=0, minf=50 00:31:16.262 IO depths : 1=1.1%, 2=2.2%, 4=7.4%, 8=75.8%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=89.8%, 8=6.7%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171070: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=665, BW=2661KiB/s (2724kB/s)(26.0MiB/10010msec) 00:31:16.262 slat (nsec): min=2885, max=69688, avg=17394.22, stdev=11958.66 
00:31:16.262 clat (usec): min=12289, max=39083, avg=23902.62, stdev=2688.76 00:31:16.262 lat (usec): min=12295, max=39090, avg=23920.02, stdev=2689.60 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[14746], 5.00th=[19792], 10.00th=[22938], 20.00th=[23462], 00:31:16.262 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[28443], 00:31:16.262 | 99.00th=[34866], 99.50th=[36963], 99.90th=[38536], 99.95th=[39060], 00:31:16.262 | 99.99th=[39060] 00:31:16.262 bw ( KiB/s): min= 2554, max= 3008, per=4.14%, avg=2659.32, stdev=113.77, samples=19 00:31:16.262 iops : min= 638, max= 752, avg=664.79, stdev=28.48, samples=19 00:31:16.262 lat (msec) : 20=5.12%, 50=94.88% 00:31:16.262 cpu : usr=99.09%, sys=0.62%, ctx=26, majf=0, minf=41 00:31:16.262 IO depths : 1=5.2%, 2=10.5%, 4=22.3%, 8=54.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171071: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10015msec) 00:31:16.262 slat (nsec): min=5667, max=60878, avg=13997.61, stdev=10284.03 00:31:16.262 clat (usec): min=10580, max=34796, avg=23788.30, stdev=1300.69 00:31:16.262 lat (usec): min=10589, max=34826, avg=23802.30, stdev=1300.41 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[14746], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:31:16.262 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.262 | 99.00th=[25035], 99.50th=[26084], 99.90th=[32375], 99.95th=[34866], 00:31:16.262 | 99.99th=[34866] 00:31:16.262 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2673.10, stdev=58.92, samples=20 00:31:16.262 iops : min= 638, max= 704, avg=668.10, stdev=14.87, samples=20 00:31:16.262 lat (msec) : 20=1.28%, 50=98.72% 00:31:16.262 cpu : usr=98.74%, sys=0.83%, ctx=138, majf=0, minf=69 00:31:16.262 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171072: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10005msec) 00:31:16.262 slat (nsec): min=4066, max=61518, avg=17423.99, stdev=11246.02 00:31:16.262 clat (usec): min=8541, max=43643, avg=23721.47, stdev=2331.45 00:31:16.262 lat (usec): min=8558, max=43655, avg=23738.89, stdev=2331.83 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[14353], 5.00th=[20055], 10.00th=[23200], 20.00th=[23462], 00:31:16.262 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.262 | 99.00th=[32900], 99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:31:16.262 | 99.99th=[43779] 00:31:16.262 bw ( KiB/s): min= 2554, max= 2864, 
per=4.17%, avg=2675.32, stdev=84.32, samples=19 00:31:16.262 iops : min= 638, max= 716, avg=668.79, stdev=21.11, samples=19 00:31:16.262 lat (msec) : 10=0.06%, 20=4.82%, 50=95.12% 00:31:16.262 cpu : usr=99.15%, sys=0.56%, ctx=53, majf=0, minf=61 00:31:16.262 IO depths : 1=3.4%, 2=9.0%, 4=22.8%, 8=55.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:16.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.262 issued rwts: total=6706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.262 filename0: (groupid=0, jobs=1): err= 0: pid=1171073: Wed Nov 6 14:14:53 2024 00:31:16.262 read: IOPS=701, BW=2806KiB/s (2873kB/s)(27.5MiB/10023msec) 00:31:16.262 slat (nsec): min=2946, max=66129, avg=11036.85, stdev=8759.92 00:31:16.262 clat (usec): min=7752, max=41902, avg=22724.79, stdev=4000.27 00:31:16.262 lat (usec): min=7759, max=41939, avg=22735.83, stdev=4001.79 00:31:16.262 clat percentiles (usec): 00:31:16.262 | 1.00th=[13304], 5.00th=[15139], 10.00th=[16188], 20.00th=[19792], 00:31:16.262 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:31:16.262 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[28705], 00:31:16.262 | 99.00th=[34866], 99.50th=[36963], 99.90th=[40633], 99.95th=[41681], 00:31:16.262 | 99.99th=[41681] 00:31:16.262 bw ( KiB/s): min= 2640, max= 3169, per=4.37%, avg=2804.45, stdev=161.99, samples=20 00:31:16.262 iops : min= 660, max= 792, avg=700.95, stdev=40.48, samples=20 00:31:16.262 lat (msec) : 10=0.30%, 20=19.97%, 50=79.73% 00:31:16.262 cpu : usr=98.55%, sys=1.05%, ctx=65, majf=0, minf=72 00:31:16.262 IO depths : 1=3.1%, 2=6.9%, 4=18.3%, 8=62.1%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=7031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171074: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:31:16.263 slat (nsec): min=4260, max=72108, avg=13958.14, stdev=9792.23 00:31:16.263 clat (usec): min=13201, max=45263, avg=23920.62, stdev=1508.69 00:31:16.263 lat (usec): min=13208, max=45275, avg=23934.57, stdev=1508.43 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[20579], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:31:16.263 | 99.00th=[28443], 99.50th=[32375], 99.90th=[45351], 99.95th=[45351], 00:31:16.263 | 99.99th=[45351] 00:31:16.263 bw ( KiB/s): min= 2436, max= 2688, per=4.14%, avg=2660.63, stdev=67.54, samples=19 00:31:16.263 iops : min= 609, max= 672, avg=665.11, stdev=16.87, samples=19 00:31:16.263 lat (msec) : 20=0.84%, 50=99.16% 00:31:16.263 cpu : usr=98.98%, sys=0.76%, ctx=21, majf=0, minf=48 00:31:16.263 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 
latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171075: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10014msec) 00:31:16.263 slat (nsec): min=5637, max=60014, avg=11307.54, stdev=8207.21 00:31:16.263 clat (usec): min=9056, max=33213, avg=23802.87, stdev=1286.37 00:31:16.263 lat (usec): min=9062, max=33220, avg=23814.18, stdev=1286.00 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[14877], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.263 | 99.00th=[24773], 99.50th=[25297], 99.90th=[32113], 99.95th=[32900], 00:31:16.263 | 99.99th=[33162] 00:31:16.263 bw ( KiB/s): min= 2554, max= 2821, per=4.16%, avg=2673.35, stdev=59.11, samples=20 00:31:16.263 iops : min= 638, max= 705, avg=668.15, stdev=14.83, samples=20 00:31:16.263 lat (msec) : 10=0.03%, 20=1.19%, 50=98.78% 00:31:16.263 cpu : usr=98.79%, sys=0.91%, ctx=107, majf=0, minf=45 00:31:16.263 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171076: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=663, BW=2656KiB/s (2719kB/s)(25.9MiB/10004msec) 00:31:16.263 slat (nsec): min=2992, max=68616, avg=14514.63, stdev=10471.75 00:31:16.263 clat (usec): min=6757, max=52151, avg=24030.03, stdev=2470.83 00:31:16.263 lat (usec): min=6763, max=52160, avg=24044.54, stdev=2470.44 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[16581], 5.00th=[21103], 10.00th=[23200], 20.00th=[23462], 00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.263 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[26346], 00:31:16.263 | 99.00th=[34866], 99.50th=[38011], 99.90th=[42206], 99.95th=[52167], 00:31:16.263 | 99.99th=[52167] 00:31:16.263 bw ( KiB/s): min= 2480, max= 2800, per=4.13%, avg=2650.32, stdev=65.75, samples=19 00:31:16.263 iops : min= 620, max= 700, avg=662.53, stdev=16.45, samples=19 00:31:16.263 lat (msec) : 10=0.06%, 20=3.99%, 50=95.87%, 100=0.08% 00:31:16.263 cpu : usr=98.50%, sys=1.10%, ctx=56, majf=0, minf=50 00:31:16.263 IO depths : 1=0.8%, 2=1.7%, 4=5.2%, 8=76.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171077: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.1MiB/10015msec) 00:31:16.263 slat (nsec): min=4118, max=72590, avg=18736.34, stdev=12044.94 00:31:16.263 clat (usec): min=15683, max=34407, avg=23864.59, stdev=840.99 00:31:16.263 lat (usec): min=15690, max=34419, avg=23883.33, stdev=840.14 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 
00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.263 | 99.00th=[25035], 99.50th=[25297], 99.90th=[34341], 99.95th=[34341], 00:31:16.263 | 99.99th=[34341] 00:31:16.263 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2660.68, stdev=52.80, samples=19 00:31:16.263 iops : min= 640, max= 672, avg=665.11, stdev=13.20, samples=19 00:31:16.263 lat (msec) : 20=0.42%, 50=99.58% 00:31:16.263 cpu : usr=98.99%, sys=0.75%, ctx=12, majf=0, minf=50 00:31:16.263 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171078: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10006msec) 00:31:16.263 slat (nsec): min=3004, max=50147, avg=12355.44, stdev=7498.68 00:31:16.263 clat (usec): min=13210, max=45312, avg=23942.16, stdev=1580.02 00:31:16.263 lat (usec): min=13217, max=45321, avg=23954.52, stdev=1579.76 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[20317], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.263 | 99.00th=[31065], 99.50th=[32637], 99.90th=[45351], 99.95th=[45351], 00:31:16.263 | 99.99th=[45351] 00:31:16.263 bw ( KiB/s): min= 2436, max= 2688, per=4.14%, avg=2659.79, stdev=67.28, samples=19 00:31:16.263 iops : min= 609, max= 672, avg=664.89, stdev=16.80, samples=19 00:31:16.263 lat (msec) : 20=0.87%, 50=99.13% 00:31:16.263 cpu : usr=98.05%, sys=1.39%, ctx=226, majf=0, minf=55 00:31:16.263 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171079: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=670, BW=2680KiB/s (2745kB/s)(26.2MiB/10014msec) 00:31:16.263 slat (nsec): min=5661, max=69119, avg=16724.70, stdev=11204.21 00:31:16.263 clat (usec): min=9119, max=37145, avg=23741.87, stdev=1604.57 00:31:16.263 lat (usec): min=9128, max=37151, avg=23758.59, stdev=1604.87 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[15533], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.263 | 99.00th=[27919], 99.50th=[31327], 99.90th=[36439], 99.95th=[36439], 00:31:16.263 | 99.99th=[36963] 00:31:16.263 bw ( KiB/s): min= 2554, max= 2858, per=4.17%, avg=2675.75, stdev=63.59, samples=20 00:31:16.263 iops : min= 638, max= 714, avg=668.75, stdev=15.86, samples=20 00:31:16.263 lat (msec) : 10=0.10%, 20=2.41%, 50=97.48% 00:31:16.263 cpu : usr=98.75%, sys=0.97%, ctx=54, majf=0, minf=65 00:31:16.263 IO depths : 
1=6.0%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171080: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=680, BW=2721KiB/s (2786kB/s)(26.6MiB/10017msec) 00:31:16.263 slat (nsec): min=3977, max=65638, avg=17038.84, stdev=11785.71 00:31:16.263 clat (usec): min=8152, max=40288, avg=23379.91, stdev=3359.80 00:31:16.263 lat (usec): min=8161, max=40297, avg=23396.94, stdev=3361.51 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[13698], 5.00th=[15401], 10.00th=[18744], 20.00th=[23462], 00:31:16.263 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[27657], 00:31:16.263 | 99.00th=[33817], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:31:16.263 | 99.99th=[40109] 00:31:16.263 bw ( KiB/s): min= 2560, max= 2912, per=4.24%, avg=2721.00, stdev=92.43, samples=20 00:31:16.263 iops : min= 640, max= 728, avg=680.10, stdev=23.12, samples=20 00:31:16.263 lat (msec) : 10=0.01%, 20=11.27%, 50=88.71% 00:31:16.263 cpu : usr=98.99%, sys=0.74%, ctx=33, majf=0, minf=56 00:31:16.263 IO depths : 1=2.5%, 2=6.2%, 4=18.1%, 8=63.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:16.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.263 issued rwts: total=6813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.263 filename1: (groupid=0, jobs=1): err= 0: pid=1171081: Wed Nov 6 14:14:53 2024 00:31:16.263 read: IOPS=666, BW=2665KiB/s (2728kB/s)(26.1MiB/10016msec) 00:31:16.263 slat (nsec): min=4013, max=80014, avg=16470.10, stdev=13486.28 00:31:16.263 clat (usec): min=14014, max=35299, avg=23886.53, stdev=940.97 00:31:16.263 lat (usec): min=14021, max=35313, avg=23903.00, stdev=939.55 00:31:16.263 clat percentiles (usec): 00:31:16.263 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:31:16.263 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.263 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.263 | 99.00th=[25035], 99.50th=[25297], 99.90th=[35390], 99.95th=[35390], 00:31:16.263 | 99.99th=[35390] 00:31:16.264 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2660.42, stdev=53.31, samples=19 00:31:16.264 iops : min= 640, max= 672, avg=665.05, stdev=13.31, samples=19 00:31:16.264 lat (msec) : 20=0.48%, 50=99.52% 00:31:16.264 cpu : usr=98.40%, sys=1.09%, ctx=137, majf=0, minf=85 00:31:16.264 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171082: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10014msec) 00:31:16.264 slat (nsec): min=5647, max=65860, 
avg=9307.52, stdev=7038.34 00:31:16.264 clat (usec): min=9221, max=33709, avg=23821.24, stdev=1200.79 00:31:16.264 lat (usec): min=9231, max=33717, avg=23830.54, stdev=1200.24 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:31:16.264 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.264 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.264 | 99.00th=[25035], 99.50th=[25560], 99.90th=[26084], 99.95th=[33424], 00:31:16.264 | 99.99th=[33817] 00:31:16.264 bw ( KiB/s): min= 2554, max= 2821, per=4.16%, avg=2673.35, stdev=58.88, samples=20 00:31:16.264 iops : min= 638, max= 705, avg=668.15, stdev=14.78, samples=20 00:31:16.264 lat (msec) : 10=0.03%, 20=1.22%, 50=98.75% 00:31:16.264 cpu : usr=98.60%, sys=1.01%, ctx=89, majf=0, minf=74 00:31:16.264 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171083: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10015msec) 00:31:16.264 slat (nsec): min=5635, max=71596, avg=18719.42, stdev=12236.94 00:31:16.264 clat (usec): min=12204, max=38711, avg=23780.72, stdev=1375.14 00:31:16.264 lat (usec): min=12211, max=38722, avg=23799.44, stdev=1375.15 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[16319], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:31:16.264 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.264 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.264 | 99.00th=[27657], 99.50th=[30802], 99.90th=[34866], 99.95th=[34866], 00:31:16.264 | 99.99th=[38536] 00:31:16.264 bw ( KiB/s): min= 2554, max= 2816, per=4.15%, avg=2666.70, stdev=76.25, samples=20 00:31:16.264 iops : min= 638, max= 704, avg=666.50, stdev=19.16, samples=20 00:31:16.264 lat (msec) : 20=1.61%, 50=98.39% 00:31:16.264 cpu : usr=98.88%, sys=0.79%, ctx=74, majf=0, minf=58 00:31:16.264 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171084: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10011msec) 00:31:16.264 slat (nsec): min=3867, max=71150, avg=21918.20, stdev=12197.70 00:31:16.264 clat (usec): min=14179, max=46604, avg=23805.36, stdev=997.55 00:31:16.264 lat (usec): min=14186, max=46613, avg=23827.27, stdev=997.91 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:31:16.264 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.264 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:31:16.264 | 99.00th=[25035], 99.50th=[28181], 99.90th=[33817], 99.95th=[34341], 00:31:16.264 | 99.99th=[46400] 00:31:16.264 bw ( 
KiB/s): min= 2554, max= 2688, per=4.14%, avg=2660.74, stdev=54.26, samples=19 00:31:16.264 iops : min= 638, max= 672, avg=665.16, stdev=13.62, samples=19 00:31:16.264 lat (msec) : 20=0.60%, 50=99.40% 00:31:16.264 cpu : usr=98.87%, sys=0.79%, ctx=79, majf=0, minf=45 00:31:16.264 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171085: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.0MiB/10007msec) 00:31:16.264 slat (nsec): min=3126, max=67999, avg=19568.75, stdev=11939.72 00:31:16.264 clat (usec): min=14028, max=48772, avg=23836.53, stdev=1621.07 00:31:16.264 lat (usec): min=14035, max=48781, avg=23856.10, stdev=1620.99 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[18482], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:31:16.264 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.264 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.264 | 99.00th=[28967], 99.50th=[31065], 99.90th=[45876], 99.95th=[45876], 00:31:16.264 | 99.99th=[49021] 00:31:16.264 bw ( KiB/s): min= 2432, max= 2736, per=4.14%, avg=2658.74, stdev=74.93, samples=19 00:31:16.264 iops : min= 608, max= 684, avg=664.63, stdev=18.76, samples=19 00:31:16.264 lat (msec) : 20=1.87%, 50=98.13% 00:31:16.264 cpu : usr=98.54%, sys=1.06%, ctx=121, majf=0, minf=56 00:31:16.264 IO depths : 1=4.1%, 2=10.2%, 4=24.3%, 8=52.9%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171086: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10011msec) 00:31:16.264 slat (nsec): min=3340, max=73474, avg=22751.29, stdev=12636.12 00:31:16.264 clat (usec): min=12365, max=40159, avg=23793.65, stdev=822.72 00:31:16.264 lat (usec): min=12371, max=40169, avg=23816.40, stdev=823.00 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:31:16.264 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:31:16.264 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:31:16.264 | 99.00th=[25035], 99.50th=[25035], 99.90th=[30016], 99.95th=[32637], 00:31:16.264 | 99.99th=[40109] 00:31:16.264 bw ( KiB/s): min= 2554, max= 2688, per=4.14%, avg=2660.74, stdev=54.26, samples=19 00:31:16.264 iops : min= 638, max= 672, avg=665.16, stdev=13.62, samples=19 00:31:16.264 lat (msec) : 20=0.42%, 50=99.58% 00:31:16.264 cpu : usr=98.77%, sys=0.89%, ctx=72, majf=0, minf=46 00:31:16.264 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171087: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=648, BW=2595KiB/s (2657kB/s)(25.4MiB/10005msec) 00:31:16.264 slat (nsec): min=4208, max=76764, avg=15096.24, stdev=12357.34 00:31:16.264 clat (usec): min=10473, max=45970, avg=24602.03, stdev=3323.32 00:31:16.264 lat (usec): min=10479, max=45983, avg=24617.12, stdev=3323.22 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[16909], 5.00th=[19792], 10.00th=[22938], 20.00th=[23462], 00:31:16.264 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:31:16.264 | 70.00th=[24249], 80.00th=[24773], 90.00th=[28443], 95.00th=[31851], 00:31:16.264 | 99.00th=[36439], 99.50th=[38536], 99.90th=[45876], 99.95th=[45876], 00:31:16.264 | 99.99th=[45876] 00:31:16.264 bw ( KiB/s): min= 2404, max= 2736, per=4.03%, avg=2590.47, stdev=80.80, samples=19 00:31:16.264 iops : min= 601, max= 684, avg=647.58, stdev=20.20, samples=19 00:31:16.264 lat (msec) : 20=5.44%, 50=94.56% 00:31:16.264 cpu : usr=98.78%, sys=0.86%, ctx=82, majf=0, minf=92 00:31:16.264 IO depths : 1=0.1%, 2=0.1%, 4=2.7%, 8=80.1%, 16=17.0%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=89.5%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171088: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10002msec) 00:31:16.264 slat (nsec): min=2909, max=68086, avg=19415.06, stdev=12896.35 00:31:16.264 clat (usec): min=12362, max=46809, avg=23884.42, stdev=888.82 00:31:16.264 lat (usec): min=12370, max=46818, avg=23903.83, stdev=887.50 00:31:16.264 clat percentiles (usec): 00:31:16.264 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:31:16.264 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:31:16.264 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:31:16.264 | 99.00th=[25035], 99.50th=[25035], 99.90th=[36963], 99.95th=[36963], 00:31:16.264 | 99.99th=[46924] 00:31:16.264 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2660.74, stdev=53.46, samples=19 00:31:16.264 iops : min= 640, max= 672, avg=665.16, stdev=13.36, samples=19 00:31:16.264 lat (msec) : 20=0.09%, 50=99.91% 00:31:16.264 cpu : usr=98.82%, sys=0.83%, ctx=69, majf=0, minf=44 00:31:16.264 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:16.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.264 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.264 filename2: (groupid=0, jobs=1): err= 0: pid=1171089: Wed Nov 6 14:14:53 2024 00:31:16.264 read: IOPS=669, BW=2678KiB/s (2742kB/s)(26.2MiB/10015msec) 00:31:16.264 slat (nsec): min=5663, max=60869, avg=12115.69, stdev=9224.42 00:31:16.264 clat (usec): min=10642, max=33342, avg=23802.54, stdev=1224.65 00:31:16.264 lat (usec): min=10653, max=33349, avg=23814.66, stdev=1223.70 00:31:16.264 clat percentiles (usec): 00:31:16.265 | 1.00th=[21103], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:31:16.265 | 
30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:31:16.265 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:31:16.265 | 99.00th=[25035], 99.50th=[25560], 99.90th=[26084], 99.95th=[26608], 00:31:16.265 | 99.99th=[33424] 00:31:16.265 bw ( KiB/s): min= 2554, max= 2816, per=4.16%, avg=2673.10, stdev=72.08, samples=20 00:31:16.265 iops : min= 638, max= 704, avg=668.10, stdev=18.14, samples=20 00:31:16.265 lat (msec) : 20=0.98%, 50=99.02% 00:31:16.265 cpu : usr=98.83%, sys=0.82%, ctx=59, majf=0, minf=39 00:31:16.265 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:16.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.265 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.265 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:16.265 00:31:16.265 Run status group 0 (all jobs): 00:31:16.265 READ: bw=62.7MiB/s (65.8MB/s), 2595KiB/s-2806KiB/s (2657kB/s-2873kB/s), io=629MiB (659MB), run=10002-10023msec 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:16.265 14:14:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 bdev_null0 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 [2024-11-06 14:14:54.052252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 bdev_null1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:16.265 { 00:31:16.265 "params": { 00:31:16.265 "name": "Nvme$subsystem", 00:31:16.265 "trtype": "$TEST_TRANSPORT", 00:31:16.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.265 "adrfam": "ipv4", 00:31:16.265 "trsvcid": "$NVMF_PORT", 00:31:16.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.265 "hdgst": ${hdgst:-false}, 00:31:16.265 "ddgst": ${ddgst:-false} 00:31:16.265 }, 00:31:16.265 "method": "bdev_nvme_attach_controller" 00:31:16.265 } 00:31:16.265 EOF 00:31:16.265 )") 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.265 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:16.266 { 00:31:16.266 "params": { 00:31:16.266 "name": "Nvme$subsystem", 00:31:16.266 "trtype": "$TEST_TRANSPORT", 00:31:16.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.266 "adrfam": "ipv4", 00:31:16.266 "trsvcid": "$NVMF_PORT", 00:31:16.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.266 "hdgst": ${hdgst:-false}, 00:31:16.266 "ddgst": ${ddgst:-false} 00:31:16.266 }, 00:31:16.266 "method": "bdev_nvme_attach_controller" 00:31:16.266 } 00:31:16.266 EOF 00:31:16.266 )") 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:16.266 14:14:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:16.266 "params": { 00:31:16.266 "name": "Nvme0", 00:31:16.266 "trtype": "tcp", 00:31:16.266 "traddr": "10.0.0.2", 00:31:16.266 "adrfam": "ipv4", 00:31:16.266 "trsvcid": "4420", 00:31:16.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.266 "hdgst": false, 00:31:16.266 "ddgst": false 00:31:16.266 }, 00:31:16.266 "method": "bdev_nvme_attach_controller" 00:31:16.266 },{ 00:31:16.266 "params": { 00:31:16.266 "name": "Nvme1", 00:31:16.266 "trtype": "tcp", 00:31:16.266 "traddr": "10.0.0.2", 00:31:16.266 "adrfam": "ipv4", 00:31:16.266 "trsvcid": "4420", 00:31:16.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.266 "hdgst": false, 00:31:16.266 "ddgst": false 00:31:16.266 }, 00:31:16.266 "method": "bdev_nvme_attach_controller" 00:31:16.266 }' 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.266 14:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.266 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:16.266 ... 00:31:16.266 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:16.266 ... 
00:31:16.266 fio-3.35 00:31:16.266 Starting 4 threads 00:31:21.539 00:31:21.539 filename0: (groupid=0, jobs=1): err= 0: pid=1173861: Wed Nov 6 14:15:00 2024 00:31:21.539 read: IOPS=2901, BW=22.7MiB/s (23.8MB/s)(113MiB/5002msec) 00:31:21.539 slat (nsec): min=2775, max=27432, avg=5977.27, stdev=1591.08 00:31:21.539 clat (usec): min=1159, max=42819, avg=2740.88, stdev=968.32 00:31:21.539 lat (usec): min=1164, max=42829, avg=2746.86, stdev=968.27 00:31:21.539 clat percentiles (usec): 00:31:21.539 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2671], 00:31:21.539 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:31:21.539 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 3064], 00:31:21.539 | 99.00th=[ 3687], 99.50th=[ 3884], 99.90th=[ 4228], 99.95th=[42730], 00:31:21.539 | 99.99th=[42730] 00:31:21.539 bw ( KiB/s): min=21488, max=23552, per=24.70%, avg=23207.11, stdev=660.04, samples=9 00:31:21.539 iops : min= 2686, max= 2944, avg=2900.89, stdev=82.51, samples=9 00:31:21.539 lat (msec) : 2=0.52%, 4=99.15%, 10=0.28%, 50=0.06% 00:31:21.539 cpu : usr=97.22%, sys=2.56%, ctx=6, majf=0, minf=29 00:31:21.539 IO depths : 1=0.1%, 2=0.2%, 4=72.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.539 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.539 issued rwts: total=14512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.539 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:21.539 filename0: (groupid=0, jobs=1): err= 0: pid=1173862: Wed Nov 6 14:15:00 2024 00:31:21.539 read: IOPS=2937, BW=22.9MiB/s (24.1MB/s)(115MiB/5001msec) 00:31:21.539 slat (nsec): min=2760, max=28605, avg=5966.45, stdev=1476.22 00:31:21.539 clat (usec): min=1041, max=4832, avg=2706.61, stdev=224.78 00:31:21.539 lat (usec): min=1046, max=4837, avg=2712.57, stdev=224.80 00:31:21.539 clat percentiles (usec): 00:31:21.539 | 1.00th=[ 1876], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2671], 00:31:21.539 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:31:21.539 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2966], 00:31:21.539 | 99.00th=[ 3490], 99.50th=[ 3916], 99.90th=[ 4293], 99.95th=[ 4359], 00:31:21.539 | 99.99th=[ 4817] 00:31:21.539 bw ( KiB/s): min=23280, max=24336, per=25.00%, avg=23484.44, stdev=335.22, samples=9 00:31:21.539 iops : min= 2910, max= 3042, avg=2935.56, stdev=41.90, samples=9 00:31:21.539 lat (msec) : 2=1.23%, 4=98.39%, 10=0.39% 00:31:21.539 cpu : usr=97.20%, sys=2.58%, ctx=6, majf=0, minf=49 00:31:21.539 IO depths : 1=0.1%, 2=0.1%, 4=73.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.539 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.539 issued rwts: total=14691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.539 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:21.539 filename1: (groupid=0, jobs=1): err= 0: pid=1173864: Wed Nov 6 14:15:00 2024 00:31:21.539 read: IOPS=2911, BW=22.7MiB/s (23.8MB/s)(114MiB/5001msec) 00:31:21.539 slat (nsec): min=2789, max=28525, avg=6071.20, stdev=1843.64 00:31:21.539 clat (usec): min=1488, max=45258, avg=2731.61, stdev=1014.86 00:31:21.539 lat (usec): min=1493, max=45268, avg=2737.68, stdev=1014.82 00:31:21.539 clat percentiles (usec): 00:31:21.539 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2671], 00:31:21.539 | 30.00th=[ 
2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:31:21.539 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2966], 00:31:21.539 | 99.00th=[ 3490], 99.50th=[ 3785], 99.90th=[ 4293], 99.95th=[45351], 00:31:21.539 | 99.99th=[45351] 00:31:21.540 bw ( KiB/s): min=21488, max=23568, per=24.78%, avg=23285.33, stdev=675.66, samples=9 00:31:21.540 iops : min= 2686, max= 2946, avg=2910.67, stdev=84.46, samples=9 00:31:21.540 lat (msec) : 2=0.36%, 4=99.33%, 10=0.26%, 50=0.05% 00:31:21.540 cpu : usr=97.14%, sys=2.66%, ctx=6, majf=0, minf=63 00:31:21.540 IO depths : 1=0.1%, 2=0.2%, 4=72.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.540 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.540 issued rwts: total=14559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.540 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:21.540 filename1: (groupid=0, jobs=1): err= 0: pid=1173865: Wed Nov 6 14:15:00 2024 00:31:21.540 read: IOPS=2995, BW=23.4MiB/s (24.5MB/s)(117MiB/5002msec) 00:31:21.540 slat (nsec): min=2767, max=25957, avg=6045.46, stdev=1795.15 00:31:21.540 clat (usec): min=846, max=4554, avg=2655.49, stdev=274.64 00:31:21.540 lat (usec): min=852, max=4560, avg=2661.54, stdev=274.64 00:31:21.540 clat percentiles (usec): 00:31:21.540 | 1.00th=[ 1549], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2573], 00:31:21.540 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:31:21.540 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966], 00:31:21.540 | 99.00th=[ 3556], 99.50th=[ 3687], 99.90th=[ 4015], 99.95th=[ 4359], 00:31:21.540 | 99.99th=[ 4555] 00:31:21.540 bw ( KiB/s): min=23680, max=25154, per=25.54%, avg=23998.44, stdev=447.60, samples=9 00:31:21.540 iops : min= 2960, max= 3144, avg=2999.78, stdev=55.87, samples=9 00:31:21.540 lat (usec) : 1000=0.05% 00:31:21.540 lat (msec) : 2=2.32%, 4=97.53%, 10=0.10% 00:31:21.540 cpu : usr=97.32%, sys=2.44%, ctx=5, majf=0, minf=41 00:31:21.540 IO depths : 1=0.1%, 2=0.3%, 4=69.4%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.540 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.540 issued rwts: total=14981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.540 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:21.540 00:31:21.540 Run status group 0 (all jobs): 00:31:21.540 READ: bw=91.7MiB/s (96.2MB/s), 22.7MiB/s-23.4MiB/s (23.8MB/s-24.5MB/s), io=459MiB (481MB), run=5001-5002msec 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
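[Note] The trace above captures the full cycle this suite repeats for each fio pass: null bdevs are created with 16-byte metadata and a DIF type, exported over NVMe/TCP, exercised through fio's spdk_bdev ioengine with a JSON config passed over /dev/fd, then torn down with the mirror-image delete RPCs. Below is a minimal standalone sketch of the same cycle, assuming a running nvmf_tgt and an SPDK checkout at $SPDK_DIR; the 10.0.0.2:4420 listener and the bdev.json/jobfile.fio names simply mirror this log and are not part of the test scripts.

    SPDK_DIR=/path/to/spdk    # assumption: point at your SPDK checkout/build

    # Target side: null bdev with 16-byte metadata and DIF type 1,
    # exported over NVMe/TCP (same RPCs as the rpc_cmd calls traced above).
    $SPDK_DIR/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp    # once per target, if not already created
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: fio through the SPDK bdev ioengine; the plugin is
    # preloaded and bdev.json holds the bdev_nvme_attach_controller config
    # of the shape printed in the trace above.
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json jobfile.fio

    # Teardown, matching the destroy_subsystems sequence in the trace.
    $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $SPDK_DIR/scripts/rpc.py bdev_null_delete bdev_null0

The harness differs only in plumbing: it writes the JSON config and fio job file to anonymous file descriptors (/dev/fd/62 and /dev/fd/61 in the fio command lines above) instead of to files on disk.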
00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 00:31:21.540 real 0m24.056s 00:31:21.540 user 5m5.843s 00:31:21.540 sys 0m3.962s 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 ************************************ 00:31:21.540 END TEST fio_dif_rand_params 00:31:21.540 ************************************ 00:31:21.540 14:15:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:21.540 14:15:00 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:21.540 14:15:00 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 ************************************ 00:31:21.540 START TEST fio_dif_digest 00:31:21.540 ************************************ 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:21.540 14:15:00 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 bdev_null0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.540 [2024-11-06 14:15:00.289448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- 
# shift 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.540 { 00:31:21.540 "params": { 00:31:21.540 "name": "Nvme$subsystem", 00:31:21.540 "trtype": "$TEST_TRANSPORT", 00:31:21.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.540 "adrfam": "ipv4", 00:31:21.540 "trsvcid": "$NVMF_PORT", 00:31:21.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.540 "hdgst": ${hdgst:-false}, 00:31:21.540 "ddgst": ${ddgst:-false} 00:31:21.540 }, 00:31:21.540 "method": "bdev_nvme_attach_controller" 00:31:21.540 } 00:31:21.540 EOF 00:31:21.540 )") 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:21.540 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
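[Note] At this point the digest test is assembling the initiator-side JSON; unlike the rand_params runs above, which left hdgst/ddgst false, this test forces both to true, enabling CRC32C header and data digests on the NVMe/TCP connection that bdev_nvme_attach_controller opens. A sketch of the complete config shape follows, assuming SPDK's standard "subsystems"/"config" JSON wrapper around the params fragment that the printf just below emits:

    # Sketch only: the trace prints just the inner method/params object;
    # this wraps it in the full spdk_json_conf document fio expects.
    cat > bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }]
      }]
    }
    EOF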
00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.541 "params": { 00:31:21.541 "name": "Nvme0", 00:31:21.541 "trtype": "tcp", 00:31:21.541 "traddr": "10.0.0.2", 00:31:21.541 "adrfam": "ipv4", 00:31:21.541 "trsvcid": "4420", 00:31:21.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.541 "hdgst": true, 00:31:21.541 "ddgst": true 00:31:21.541 }, 00:31:21.541 "method": "bdev_nvme_attach_controller" 00:31:21.541 }' 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:21.541 14:15:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.541 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:21.541 ... 
00:31:21.541 fio-3.35 00:31:21.541 Starting 3 threads 00:31:33.743 00:31:33.743 filename0: (groupid=0, jobs=1): err= 0: pid=1175308: Wed Nov 6 14:15:11 2024 00:31:33.743 read: IOPS=145, BW=18.2MiB/s (19.1MB/s)(182MiB/10020msec) 00:31:33.743 slat (nsec): min=3051, max=49848, avg=6849.32, stdev=1622.35 00:31:33.743 clat (usec): min=8533, max=94451, avg=20605.62, stdev=18607.58 00:31:33.743 lat (usec): min=8541, max=94457, avg=20612.47, stdev=18607.54 00:31:33.743 clat percentiles (usec): 00:31:33.743 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10290], 00:31:33.743 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:31:33.743 | 70.00th=[12125], 80.00th=[50070], 90.00th=[51643], 95.00th=[52691], 00:31:33.743 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[94897], 00:31:33.743 | 99.99th=[94897] 00:31:33.743 bw ( KiB/s): min=12288, max=25907, per=16.79%, avg=18626.55, stdev=3741.75, samples=20 00:31:33.743 iops : min= 96, max= 202, avg=145.50, stdev=29.19, samples=20 00:31:33.743 lat (msec) : 10=14.20%, 20=63.03%, 50=1.30%, 100=21.47% 00:31:33.743 cpu : usr=96.60%, sys=3.16%, ctx=20, majf=0, minf=128 00:31:33.743 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.743 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.743 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:33.743 filename0: (groupid=0, jobs=1): err= 0: pid=1175309: Wed Nov 6 14:15:11 2024 00:31:33.743 read: IOPS=325, BW=40.6MiB/s (42.6MB/s)(407MiB/10005msec) 00:31:33.743 slat (nsec): min=3008, max=31452, avg=6466.65, stdev=904.62 00:31:33.743 clat (usec): min=5575, max=14126, avg=9217.86, stdev=1367.19 00:31:33.743 lat (usec): min=5581, max=14135, avg=9224.33, stdev=1367.23 00:31:33.743 clat percentiles (usec): 00:31:33.743 | 1.00th=[ 6915], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7898], 00:31:33.743 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9634], 00:31:33.743 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11469], 00:31:33.743 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13173], 99.95th=[14091], 00:31:33.743 | 99.99th=[14091] 00:31:33.743 bw ( KiB/s): min=39168, max=44032, per=37.58%, avg=41687.58, stdev=1423.46, samples=19 00:31:33.743 iops : min= 306, max= 344, avg=325.68, stdev=11.12, samples=19 00:31:33.743 lat (msec) : 10=67.78%, 20=32.22% 00:31:33.743 cpu : usr=96.45%, sys=3.31%, ctx=18, majf=0, minf=141 00:31:33.743 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.743 issued rwts: total=3253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.743 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:33.743 filename0: (groupid=0, jobs=1): err= 0: pid=1175310: Wed Nov 6 14:15:11 2024 00:31:33.743 read: IOPS=397, BW=49.7MiB/s (52.1MB/s)(499MiB/10044msec) 00:31:33.743 slat (nsec): min=2948, max=26399, avg=6447.62, stdev=811.77 00:31:33.743 clat (usec): min=5067, max=47265, avg=7527.81, stdev=1461.90 00:31:33.743 lat (usec): min=5073, max=47271, avg=7534.25, stdev=1461.94 00:31:33.743 clat percentiles (usec): 00:31:33.743 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390], 00:31:33.743 | 
30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7898], 00:31:33.743 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9372], 00:31:33.743 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[13698], 99.95th=[46400], 00:31:33.743 | 99.99th=[47449] 00:31:33.743 bw ( KiB/s): min=47104, max=54272, per=46.05%, avg=51084.80, stdev=1701.11, samples=20 00:31:33.743 iops : min= 368, max= 424, avg=399.10, stdev=13.29, samples=20 00:31:33.743 lat (msec) : 10=99.15%, 20=0.80%, 50=0.05% 00:31:33.743 cpu : usr=96.37%, sys=3.40%, ctx=11, majf=0, minf=141 00:31:33.743 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.743 issued rwts: total=3993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.743 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:33.743 00:31:33.743 Run status group 0 (all jobs): 00:31:33.743 READ: bw=108MiB/s (114MB/s), 18.2MiB/s-49.7MiB/s (19.1MB/s-52.1MB/s), io=1088MiB (1141MB), run=10005-10044msec 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.743 00:31:33.743 real 0m11.018s 00:31:33.743 user 0m39.537s 00:31:33.743 sys 0m1.267s 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:33.743 14:15:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.743 ************************************ 00:31:33.743 END TEST fio_dif_digest 00:31:33.743 ************************************ 00:31:33.743 14:15:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:33.743 14:15:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:33.743 rmmod nvme_tcp 00:31:33.743 rmmod nvme_fabrics 00:31:33.743 rmmod nvme_keyring 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:33.743 14:15:11 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:33.744 14:15:11 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1163677 ']' 00:31:33.744 14:15:11 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1163677 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 1163677 ']' 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 1163677 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1163677 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1163677' 00:31:33.744 killing process with pid 1163677 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@971 -- # kill 1163677 00:31:33.744 14:15:11 nvmf_dif -- common/autotest_common.sh@976 -- # wait 1163677 00:31:33.744 14:15:11 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:33.744 14:15:11 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:34.312 Waiting for block devices as requested 00:31:34.312 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:34.312 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:34.571 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:34.571 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:34.571 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:34.571 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:34.830 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:34.830 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:34.830 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:35.089 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:35.089 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:35.089 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:35.089 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:35.089 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:35.348 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:35.348 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:35.348 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.348 14:15:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.348 14:15:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:35.348 14:15:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.885 14:15:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.885 00:31:37.885 real 
1m11.942s 00:31:37.885 user 7m41.215s 00:31:37.885 sys 0m17.010s 00:31:37.885 14:15:16 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:37.885 14:15:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.885 ************************************ 00:31:37.885 END TEST nvmf_dif 00:31:37.885 ************************************ 00:31:37.885 14:15:16 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:37.885 14:15:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:37.885 14:15:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:37.885 14:15:16 -- common/autotest_common.sh@10 -- # set +x 00:31:37.885 ************************************ 00:31:37.885 START TEST nvmf_abort_qd_sizes 00:31:37.885 ************************************ 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:37.885 * Looking for test storage... 00:31:37.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.885 --rc genhtml_branch_coverage=1 00:31:37.885 --rc genhtml_function_coverage=1 00:31:37.885 --rc genhtml_legend=1 00:31:37.885 --rc geninfo_all_blocks=1 00:31:37.885 --rc geninfo_unexecuted_blocks=1 00:31:37.885 00:31:37.885 ' 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.885 --rc genhtml_branch_coverage=1 00:31:37.885 --rc genhtml_function_coverage=1 00:31:37.885 --rc genhtml_legend=1 00:31:37.885 --rc geninfo_all_blocks=1 00:31:37.885 --rc geninfo_unexecuted_blocks=1 00:31:37.885 00:31:37.885 ' 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.885 --rc genhtml_branch_coverage=1 00:31:37.885 --rc genhtml_function_coverage=1 00:31:37.885 --rc genhtml_legend=1 00:31:37.885 --rc geninfo_all_blocks=1 00:31:37.885 --rc geninfo_unexecuted_blocks=1 00:31:37.885 00:31:37.885 ' 00:31:37.885 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.885 --rc genhtml_branch_coverage=1 00:31:37.885 --rc genhtml_function_coverage=1 00:31:37.885 --rc genhtml_legend=1 00:31:37.885 --rc geninfo_all_blocks=1 00:31:37.885 --rc geninfo_unexecuted_blocks=1 00:31:37.885 00:31:37.886 ' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:37.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.886 14:15:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:43.159 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:43.159 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:43.159 Found net devices under 0000:31:00.0: cvl_0_0 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:43.159 Found net devices under 0000:31:00.1: cvl_0_1 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.159 14:15:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.159 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:31:43.160 00:31:43.160 --- 10.0.0.2 ping statistics --- 00:31:43.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.160 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:31:43.160 00:31:43.160 --- 10.0.0.1 ping statistics --- 00:31:43.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.160 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:43.160 14:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:45.064 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:45.065 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1185305 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1185305 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 1185305 ']' 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
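The namespace plumbing traced above reduces to the shell sequence below. This is a sketch reconstructed from the logged commands, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addressing shown in this run.

    # target side: move one e810 port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: the peer port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # open the NVMe/TCP port and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2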
00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:45.065 14:15:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:45.324 [2024-11-06 14:15:24.354361] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:31:45.324 [2024-11-06 14:15:24.354407] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.324 [2024-11-06 14:15:24.438495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:45.324 [2024-11-06 14:15:24.475898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.324 [2024-11-06 14:15:24.475933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.324 [2024-11-06 14:15:24.475942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.324 [2024-11-06 14:15:24.475948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.324 [2024-11-06 14:15:24.475954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:45.324 [2024-11-06 14:15:24.477487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.324 [2024-11-06 14:15:24.477642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.324 [2024-11-06 14:15:24.477755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:45.324 [2024-11-06 14:15:24.477756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.892 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:45.892 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:31:45.892 14:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.892 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:45.892 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:46.151 14:15:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:46.151 ************************************ 00:31:46.151 START TEST spdk_target_abort 00:31:46.151 ************************************ 00:31:46.151 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:31:46.151 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:46.151 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:46.151 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.151 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.410 spdk_targetn1 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.410 [2024-11-06 14:15:25.530802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.410 14:15:25 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.410 [2024-11-06 14:15:25.567068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.410 14:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 
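The rabort helper invoked above simply sweeps the SPDK abort example over queue depths 4, 24 and 64 against the listener it just created; the per-depth output follows. A standalone sketch of the same sweep, using the binary path and target string from this run:

    ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # 50% mixed read/write at 4 KiB blocks; aborts are issued against
        # I/O still outstanding at the given queue depth
        "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
    done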
00:31:46.410 [2024-11-06 14:15:25.680220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:31:46.410 [2024-11-06 14:15:25.680258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:31:46.410 [2024-11-06 14:15:25.687842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:272 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:31:46.410 [2024-11-06 14:15:25.687863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0024 p:1 m:0 dnr:0 00:31:46.668 [2024-11-06 14:15:25.703797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:832 len:8 PRP1 0x200004abe000 PRP2 0x0 00:31:46.668 [2024-11-06 14:15:25.703817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:31:46.668 [2024-11-06 14:15:25.723931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1432 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:31:46.668 [2024-11-06 14:15:25.723951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b4 p:1 m:0 dnr:0 00:31:46.668 [2024-11-06 14:15:25.738689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1944 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:46.668 [2024-11-06 14:15:25.738708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:31:46.668 [2024-11-06 14:15:25.763766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2896 len:8 PRP1 0x200004abe000 PRP2 0x0 00:31:46.668 [2024-11-06 14:15:25.763787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:46.668 [2024-11-06 14:15:25.771558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3160 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:46.668 [2024-11-06 14:15:25.771577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:008c p:0 m:0 dnr:0 00:31:46.668 [2024-11-06 14:15:25.795740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3984 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:46.668 [2024-11-06 14:15:25.795760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00f4 p:0 m:0 dnr:0 00:31:49.956 Initializing NVMe Controllers 00:31:49.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:49.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:49.956 Initialization complete. Launching workers. 
00:31:49.956 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14305, failed: 8 00:31:49.956 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3332, failed to submit 10981 00:31:49.956 success 770, unsuccessful 2562, failed 0 00:31:49.956 14:15:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:49.956 14:15:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:49.956 [2024-11-06 14:15:28.888073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:608 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:31:49.956 [2024-11-06 14:15:28.888104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:31:49.956 [2024-11-06 14:15:28.919121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:1424 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:31:49.956 [2024-11-06 14:15:28.919142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:31:49.956 [2024-11-06 14:15:28.959061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:2352 len:8 PRP1 0x200004e48000 PRP2 0x0 00:31:49.956 [2024-11-06 14:15:28.959080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:49.956 [2024-11-06 14:15:29.015042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:3600 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:31:49.956 [2024-11-06 14:15:29.015062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00cf p:0 m:0 dnr:0 00:31:51.860 [2024-11-06 14:15:30.805986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:45632 len:8 PRP1 0x200004e42000 PRP2 0x0 00:31:51.860 [2024-11-06 14:15:30.806028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:004c p:1 m:0 dnr:0 00:31:52.118 [2024-11-06 14:15:31.389023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:58624 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:31:52.118 [2024-11-06 14:15:31.389050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00ab p:1 m:0 dnr:0 00:31:52.378 [2024-11-06 14:15:31.597048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:63328 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:31:52.378 [2024-11-06 14:15:31.597072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00f4 p:1 m:0 dnr:0 00:31:52.946 Initializing NVMe Controllers 00:31:52.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:52.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:52.946 Initialization complete. Launching workers. 
00:31:52.946 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8716, failed: 7 00:31:52.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7497 00:31:52.946 success 319, unsuccessful 907, failed 0 00:31:52.946 14:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:52.946 14:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.205 [2024-11-06 14:15:32.252967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:162 nsid:1 lba:1960 len:8 PRP1 0x200004b08000 PRP2 0x0 00:31:53.205 [2024-11-06 14:15:32.252991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:162 cdw0:0 sqhd:00e7 p:1 m:0 dnr:0 00:31:54.140 [2024-11-06 14:15:33.314225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:186 nsid:1 lba:127416 len:8 PRP1 0x200004ad4000 PRP2 0x0 00:31:54.140 [2024-11-06 14:15:33.314254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:186 cdw0:0 sqhd:0024 p:1 m:0 dnr:0 00:31:56.046 Initializing NVMe Controllers 00:31:56.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:56.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:56.046 Initialization complete. Launching workers. 00:31:56.046 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44219, failed: 2 00:31:56.046 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2690, failed to submit 41531 00:31:56.046 success 588, unsuccessful 2102, failed 0 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.046 14:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1185305 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 1185305 ']' 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 1185305 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1185305 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1185305' 00:31:57.953 killing process with pid 1185305 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 1185305 00:31:57.953 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 1185305 00:31:58.212 00:31:58.212 real 0m12.040s 00:31:58.212 user 0m48.806s 00:31:58.212 sys 0m2.031s 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.212 ************************************ 00:31:58.212 END TEST spdk_target_abort 00:31:58.212 ************************************ 00:31:58.212 14:15:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:58.212 14:15:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:58.212 14:15:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:58.212 14:15:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.212 ************************************ 00:31:58.212 START TEST kernel_target_abort 00:31:58.212 ************************************ 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:58.212 14:15:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:58.212 14:15:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:00.747 Waiting for block devices as requested 00:32:00.747 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:00.747 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:01.036 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:01.036 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:01.036 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:01.036 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:01.329 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:01.329 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:01.329 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:01.329 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:01.329 No valid GPT data, bailing 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:01.329 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:01.329 14:15:40 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:01.330 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:01.330 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:32:01.589 00:32:01.589 Discovery Log Number of Records 2, Generation counter 2 00:32:01.589 =====Discovery Log Entry 0====== 00:32:01.589 trtype: tcp 00:32:01.589 adrfam: ipv4 00:32:01.589 subtype: current discovery subsystem 00:32:01.589 treq: not specified, sq flow control disable supported 00:32:01.589 portid: 1 00:32:01.589 trsvcid: 4420 00:32:01.589 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:01.589 traddr: 10.0.0.1 00:32:01.589 eflags: none 00:32:01.589 sectype: none 00:32:01.589 =====Discovery Log Entry 1====== 00:32:01.589 trtype: tcp 00:32:01.589 adrfam: ipv4 00:32:01.589 subtype: nvme subsystem 00:32:01.589 treq: not specified, sq flow control disable supported 00:32:01.589 portid: 1 00:32:01.589 trsvcid: 4420 00:32:01.589 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:01.589 traddr: 10.0.0.1 00:32:01.589 eflags: none 00:32:01.589 sectype: none 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:01.589 
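For the kernel target, configure_kernel_target drives the nvmet configfs tree directly, as traced above; the discovery log above confirms the resulting listener. A sketch of the same sequence follows. The xtrace does not show the echo redirection targets, so the standard nvmet attribute names below are assumed:

    SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1
    mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUB/attr_serial"    # assumed target file
    echo 1            > "$SUB/attr_allow_any_host"                # assumed target file
    echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
    echo 1            > "$SUB/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"   # expose the subsystem on the port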
14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:01.589 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:01.590 14:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.876 Initializing NVMe Controllers 00:32:04.876 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:04.876 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:04.876 Initialization complete. Launching workers. 
00:32:04.876 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95203, failed: 0 00:32:04.876 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95203, failed to submit 0 00:32:04.876 success 0, unsuccessful 95203, failed 0 00:32:04.876 14:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:04.876 14:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:08.164 Initializing NVMe Controllers 00:32:08.164 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:08.164 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:08.164 Initialization complete. Launching workers. 00:32:08.164 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 155042, failed: 0 00:32:08.164 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38982, failed to submit 116060 00:32:08.164 success 0, unsuccessful 38982, failed 0 00:32:08.164 14:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:08.164 14:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:10.697 Initializing NVMe Controllers 00:32:10.697 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:10.697 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:10.697 Initialization complete. Launching workers. 
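[Annotation] Two of the three queue-depth runs are visible here; abort_qd_sizes.sh (rabort) drives them by iterating qds=(4 24 64) over a single connection string. The counters read as: at qd 4 every abort fit into the submission queue (95203 submitted, 0 failed to submit), while at qd 24 the abort queue saturates and most aborts are never submitted (38982 submitted vs 116060 failed to submit); "success 0" appears to be tolerated, since the test exercises abort queue sizing rather than abort hit rate. A condensed sketch of the loop, with values taken from the log:

    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        # 4 KiB mixed read/write I/O (-w rw -M 50) at the given queue depth,
        # aborting outstanding commands as they are issued
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done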
00:32:10.697 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146245, failed: 0 00:32:10.697 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36578, failed to submit 109667 00:32:10.697 success 0, unsuccessful 36578, failed 0 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:10.697 14:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:13.230 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:13.230 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:15.135 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:15.135 00:32:15.135 real 0m16.733s 00:32:15.135 user 0m8.569s 00:32:15.135 sys 0m3.983s 00:32:15.135 14:15:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:15.135 14:15:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:15.135 ************************************ 00:32:15.135 END TEST kernel_target_abort 00:32:15.135 ************************************ 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.135 rmmod nvme_tcp 00:32:15.135 rmmod nvme_fabrics 00:32:15.135 rmmod nvme_keyring 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1185305 ']' 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1185305 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 1185305 ']' 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 1185305 00:32:15.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1185305) - No such process 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 1185305 is not found' 00:32:15.135 Process with pid 1185305 is not found 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:15.135 14:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:17.037 Waiting for block devices as requested 00:32:17.296 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:17.296 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:17.296 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:17.296 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:17.296 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:17.556 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:17.556 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:17.556 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:17.556 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:17.815 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:17.815 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:17.815 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:18.074 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:18.074 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:18.074 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:18.074 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:18.333 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
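[Annotation] Teardown mirrors setup: clean_kernel_target unlinks the subsystem from the port, removes the configfs nodes and unloads nvmet, then nvmftestfini unloads the initiator modules (the rmmod lines above), prunes SPDK's firewall rules (iptables-save | grep -v SPDK_NVMF | iptables-restore) and, just below, flushes the test interface address. Condensed from the trace:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$sub/namespaces/1/enable"   # the bare 'echo 0' at common.sh@714; target file assumed
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
    modprobe -r nvmet_tcp nvmet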
00:32:18.333 14:15:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.236 14:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.236 00:32:20.236 real 0m42.825s 00:32:20.236 user 1m0.807s 00:32:20.236 sys 0m13.190s 00:32:20.236 14:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:20.236 14:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.236 ************************************ 00:32:20.236 END TEST nvmf_abort_qd_sizes 00:32:20.236 ************************************ 00:32:20.236 14:15:59 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:20.236 14:15:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:20.236 14:15:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:20.236 14:15:59 -- common/autotest_common.sh@10 -- # set +x 00:32:20.236 ************************************ 00:32:20.236 START TEST keyring_file 00:32:20.236 ************************************ 00:32:20.236 14:15:59 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:20.505 * Looking for test storage... 00:32:20.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.505 14:15:59 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.505 --rc genhtml_branch_coverage=1 00:32:20.505 --rc genhtml_function_coverage=1 00:32:20.505 --rc genhtml_legend=1 00:32:20.505 --rc geninfo_all_blocks=1 00:32:20.505 --rc geninfo_unexecuted_blocks=1 00:32:20.505 00:32:20.505 ' 00:32:20.505 14:15:59 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.506 --rc genhtml_branch_coverage=1 00:32:20.506 --rc genhtml_function_coverage=1 00:32:20.506 --rc genhtml_legend=1 00:32:20.506 --rc geninfo_all_blocks=1 00:32:20.506 --rc geninfo_unexecuted_blocks=1 00:32:20.506 00:32:20.506 ' 00:32:20.506 14:15:59 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:20.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.506 --rc genhtml_branch_coverage=1 00:32:20.506 --rc genhtml_function_coverage=1 00:32:20.506 --rc genhtml_legend=1 00:32:20.506 --rc geninfo_all_blocks=1 00:32:20.506 --rc geninfo_unexecuted_blocks=1 00:32:20.506 00:32:20.506 ' 00:32:20.506 14:15:59 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:20.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.506 --rc genhtml_branch_coverage=1 00:32:20.506 --rc genhtml_function_coverage=1 00:32:20.506 --rc genhtml_legend=1 00:32:20.506 --rc geninfo_all_blocks=1 00:32:20.506 --rc geninfo_unexecuted_blocks=1 00:32:20.506 00:32:20.506 ' 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.506 
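[Annotation] The run_test keyring_file preamble above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which selects the legacy set of --rc lcov_* coverage options exported above. The comparison splits each version string on '.', '-' and ':' and compares component-wise; a minimal sketch of the less-than case only (the real cmp_versions also handles '>', '=' and non-numeric components):

    lt() {
        # success when $1 < $2, e.g.: lt 1.15 2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"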
14:15:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.506 14:15:59 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.506 14:15:59 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.506 14:15:59 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.506 14:15:59 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.506 14:15:59 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.506 14:15:59 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.506 14:15:59 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.506 14:15:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:20.506 14:15:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@51 -- # : 0 
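[Annotation] nvmf/common.sh also derives the test's host identity above: nvme gen-hostnqn emits a fresh UUID-based NQN, and the host ID reuses that UUID (801c19ac-fce9-ec11-9bc7-a4bf019282bb). The extraction below is a plausible one-liner, not copied from the script; the log shows only the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID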
00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:20.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Q1cvxR2OYG 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:20.506 14:15:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Q1cvxR2OYG 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Q1cvxR2OYG 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Q1cvxR2OYG 00:32:20.506 14:15:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.mK06Je1VFT 00:32:20.506 14:15:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:20.507 14:15:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:20.507 14:15:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:20.507 14:15:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:20.507 14:15:59 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:20.507 14:15:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:20.507 14:15:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:20.507 14:15:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mK06Je1VFT 00:32:20.507 14:15:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mK06Je1VFT 00:32:20.507 14:15:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mK06Je1VFT 00:32:20.507 14:15:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=1195824 00:32:20.507 14:15:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1195824 00:32:20.507 14:15:59 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1195824 ']' 00:32:20.507 14:15:59 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.507 14:15:59 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:20.507 14:15:59 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.507 14:15:59 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:20.507 14:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:20.507 14:15:59 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:20.507 [2024-11-06 14:15:59.765920] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
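[Annotation] Just before the spdk_tgt launch that starts above, keyring/common.sh's prep_key builds the two key files the bperf tests will exercise: mktemp allocates a path, format_interchange_psk (the inline python at nvmf/common.sh@733) wraps the raw hex key into an NVMe TLS PSK interchange string with the NVMeTLSkey-1 prefix, and chmod 0600 applies the owner-only permissions the keyring insists on (a later step shows 0660 being rejected). A condensed sketch, assuming format_interchange_psk is in scope:

    prep_key() {
        local key=$1 digest=$2 path
        path=$(mktemp)                                 # e.g. /tmp/tmp.Q1cvxR2OYG
        format_interchange_psk "$key" "$digest" > "$path"
        chmod 0600 "$path"                             # 0660 would be refused
        echo "$path"
    }
    key0path=$(prep_key 00112233445566778899aabbccddeeff 0)
    key1path=$(prep_key 112233445566778899aabbccddeeff00 0)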
00:32:20.507 [2024-11-06 14:15:59.765992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195824 ] 00:32:20.767 [2024-11-06 14:15:59.849939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.767 [2024-11-06 14:15:59.903558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.336 14:16:00 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:21.336 14:16:00 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:32:21.336 14:16:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:21.336 14:16:00 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.336 14:16:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:21.336 [2024-11-06 14:16:00.572383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.336 null0 00:32:21.336 [2024-11-06 14:16:00.604470] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:21.336 [2024-11-06 14:16:00.604863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.597 14:16:00 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:21.597 [2024-11-06 14:16:00.632484] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:21.597 request: 00:32:21.597 { 00:32:21.597 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.597 "secure_channel": false, 00:32:21.597 "listen_address": { 00:32:21.597 "trtype": "tcp", 00:32:21.597 "traddr": "127.0.0.1", 00:32:21.597 "trsvcid": "4420" 00:32:21.597 }, 00:32:21.597 "method": "nvmf_subsystem_add_listener", 00:32:21.597 "req_id": 1 00:32:21.597 } 00:32:21.597 Got JSON-RPC error response 00:32:21.597 response: 00:32:21.597 { 00:32:21.597 "code": -32602, 00:32:21.597 "message": "Invalid parameters" 00:32:21.597 } 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:21.597 14:16:00 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:21.597 14:16:00 keyring_file -- keyring/file.sh@47 -- # bperfpid=1195953 00:32:21.597 14:16:00 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1195953 /var/tmp/bperf.sock 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1195953 ']' 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:21.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:21.597 14:16:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:21.597 14:16:00 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:21.597 [2024-11-06 14:16:00.672001] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:32:21.597 [2024-11-06 14:16:00.672049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195953 ] 00:32:21.597 [2024-11-06 14:16:00.748512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.597 [2024-11-06 14:16:00.785434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.165 14:16:01 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:22.165 14:16:01 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:32:22.165 14:16:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:22.165 14:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:22.425 14:16:01 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mK06Je1VFT 00:32:22.425 14:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mK06Je1VFT 00:32:22.684 14:16:01 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:22.684 14:16:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:22.684 14:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.684 14:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.684 14:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.684 14:16:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Q1cvxR2OYG == \/\t\m\p\/\t\m\p\.\Q\1\c\v\x\R\2\O\Y\G ]] 00:32:22.684 14:16:01 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:22.684 14:16:01 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:22.684 14:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.684 14:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:32:22.684 14:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.943 14:16:02 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mK06Je1VFT == \/\t\m\p\/\t\m\p\.\m\K\0\6\J\e\1\V\F\T ]] 00:32:22.943 14:16:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:22.943 14:16:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.943 14:16:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.943 14:16:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.943 14:16:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.943 14:16:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.203 14:16:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:23.203 14:16:02 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:23.203 14:16:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:23.203 14:16:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.203 14:16:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.203 14:16:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.203 14:16:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:23.203 14:16:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:23.203 14:16:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.203 14:16:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.462 [2024-11-06 14:16:02.589556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:23.462 nvme0n1 00:32:23.462 14:16:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:23.462 14:16:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.462 14:16:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.462 14:16:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.462 14:16:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.463 14:16:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.722 14:16:02 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:23.722 14:16:02 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:23.722 14:16:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.722 14:16:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:23.722 14:16:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.722 14:16:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:23.722 14:16:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:32:23.722 14:16:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:23.722 14:16:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:23.981 Running I/O for 1 seconds... 00:32:24.919 21466.00 IOPS, 83.85 MiB/s 00:32:24.919 Latency(us) 00:32:24.919 [2024-11-06T13:16:04.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.919 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:24.919 nvme0n1 : 1.00 21514.22 84.04 0.00 0.00 5939.45 2389.33 14636.37 00:32:24.919 [2024-11-06T13:16:04.203Z] =================================================================================================================== 00:32:24.919 [2024-11-06T13:16:04.203Z] Total : 21514.22 84.04 0.00 0.00 5939.45 2389.33 14636.37 00:32:24.919 { 00:32:24.919 "results": [ 00:32:24.919 { 00:32:24.919 "job": "nvme0n1", 00:32:24.919 "core_mask": "0x2", 00:32:24.919 "workload": "randrw", 00:32:24.919 "percentage": 50, 00:32:24.919 "status": "finished", 00:32:24.919 "queue_depth": 128, 00:32:24.920 "io_size": 4096, 00:32:24.920 "runtime": 1.003801, 00:32:24.920 "iops": 21514.22443293043, 00:32:24.920 "mibps": 84.0399391911345, 00:32:24.920 "io_failed": 0, 00:32:24.920 "io_timeout": 0, 00:32:24.920 "avg_latency_us": 5939.453518552818, 00:32:24.920 "min_latency_us": 2389.3333333333335, 00:32:24.920 "max_latency_us": 14636.373333333333 00:32:24.920 } 00:32:24.920 ], 00:32:24.920 "core_count": 1 00:32:24.920 } 00:32:24.920 14:16:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:24.920 14:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:25.180 14:16:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.180 14:16:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:25.180 14:16:04 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:25.180 14:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.440 14:16:04 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:25.440 14:16:04 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@650 -- # local 
es=0 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:25.440 14:16:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:25.440 14:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:25.700 [2024-11-06 14:16:04.736408] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:25.700 [2024-11-06 14:16:04.737150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f499d0 (107): Transport endpoint is not connected 00:32:25.700 [2024-11-06 14:16:04.738146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f499d0 (9): Bad file descriptor 00:32:25.700 [2024-11-06 14:16:04.739148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:25.700 [2024-11-06 14:16:04.739158] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:25.700 [2024-11-06 14:16:04.739164] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:25.700 [2024-11-06 14:16:04.739170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
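[Annotation] The failure above is deliberate: key1 is not the PSK the listener was set up with for this host, so the TLS handshake collapses (the errno 107 / bad-file-descriptor errors) and bdev_nvme_attach_controller returns the Input/output error dumped in the JSON below. The test asserts the failure with the NOT wrapper from common/autotest_common.sh, whose gist is an inverted exit status; the es bookkeeping visible in the trace also screens out signal deaths (es > 128). A reduced sketch:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # crashed or killed: propagate, do not invert
        (( es == 0 )) && return 1        # command unexpectedly succeeded
        return 0                         # command failed, as the test requires
    }
    NOT false && echo "expected failure observed"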
00:32:25.700 request: 00:32:25.700 { 00:32:25.700 "name": "nvme0", 00:32:25.700 "trtype": "tcp", 00:32:25.700 "traddr": "127.0.0.1", 00:32:25.700 "adrfam": "ipv4", 00:32:25.700 "trsvcid": "4420", 00:32:25.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.700 "prchk_reftag": false, 00:32:25.700 "prchk_guard": false, 00:32:25.700 "hdgst": false, 00:32:25.700 "ddgst": false, 00:32:25.700 "psk": "key1", 00:32:25.700 "allow_unrecognized_csi": false, 00:32:25.700 "method": "bdev_nvme_attach_controller", 00:32:25.700 "req_id": 1 00:32:25.700 } 00:32:25.700 Got JSON-RPC error response 00:32:25.700 response: 00:32:25.700 { 00:32:25.700 "code": -5, 00:32:25.700 "message": "Input/output error" 00:32:25.700 } 00:32:25.700 14:16:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:25.700 14:16:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:25.700 14:16:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:25.700 14:16:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:25.701 14:16:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.701 14:16:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:25.701 14:16:04 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.701 14:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.959 14:16:05 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:25.959 14:16:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:25.959 14:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:26.217 14:16:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:26.217 14:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:26.217 14:16:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:26.217 14:16:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:26.217 14:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.477 14:16:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:32:26.477 14:16:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 14:16:05 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 14:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 [2024-11-06 14:16:05.701710] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q1cvxR2OYG': 0100660 00:32:26.477 [2024-11-06 14:16:05.701728] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:26.477 request: 00:32:26.477 { 00:32:26.477 "name": "key0", 00:32:26.477 "path": "/tmp/tmp.Q1cvxR2OYG", 00:32:26.477 "method": "keyring_file_add_key", 00:32:26.477 "req_id": 1 00:32:26.477 } 00:32:26.477 Got JSON-RPC error response 00:32:26.477 response: 00:32:26.477 { 00:32:26.477 "code": -1, 00:32:26.477 "message": "Operation not permitted" 00:32:26.477 } 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:26.477 14:16:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:26.477 14:16:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 14:16:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:26.477 14:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q1cvxR2OYG 00:32:26.737 14:16:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Q1cvxR2OYG 00:32:26.737 14:16:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:26.737 14:16:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:26.737 14:16:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.737 14:16:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.737 14:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.737 14:16:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.996 14:16:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:26.996 14:16:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.996 14:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.996 [2024-11-06 14:16:06.174912] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Q1cvxR2OYG': No such file or directory 00:32:26.996 [2024-11-06 14:16:06.174926] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:26.996 [2024-11-06 14:16:06.174939] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:26.996 [2024-11-06 14:16:06.174945] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:26.996 [2024-11-06 14:16:06.174950] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:26.996 [2024-11-06 14:16:06.174955] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:26.996 request: 00:32:26.996 { 00:32:26.996 "name": "nvme0", 00:32:26.996 "trtype": "tcp", 00:32:26.996 "traddr": "127.0.0.1", 00:32:26.996 "adrfam": "ipv4", 00:32:26.996 "trsvcid": "4420", 00:32:26.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:26.996 "prchk_reftag": false, 00:32:26.996 "prchk_guard": false, 00:32:26.996 "hdgst": false, 00:32:26.996 "ddgst": false, 00:32:26.996 "psk": "key0", 00:32:26.996 "allow_unrecognized_csi": false, 00:32:26.996 "method": "bdev_nvme_attach_controller", 00:32:26.996 "req_id": 1 00:32:26.996 } 00:32:26.996 Got JSON-RPC error response 00:32:26.996 response: 00:32:26.996 { 00:32:26.996 "code": -19, 00:32:26.996 "message": "No such device" 00:32:26.996 } 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:26.996 14:16:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:26.996 14:16:06 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:26.996 14:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:27.254 14:16:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PAKiXnlviL 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:27.254 14:16:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:27.254 14:16:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:27.254 14:16:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:27.254 14:16:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:27.254 14:16:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:27.254 14:16:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PAKiXnlviL 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PAKiXnlviL 00:32:27.254 14:16:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.PAKiXnlviL 00:32:27.254 14:16:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAKiXnlviL 00:32:27.254 14:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PAKiXnlviL 00:32:27.514 14:16:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:27.514 14:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:27.514 nvme0n1 00:32:27.514 14:16:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:27.514 14:16:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:27.514 14:16:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.514 14:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.514 14:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:27.514 14:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.773 14:16:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:27.773 14:16:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:27.773 14:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:28.032 14:16:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:28.033 14:16:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.033 14:16:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:28.033 14:16:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.033 14:16:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.292 14:16:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:28.292 14:16:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:28.292 14:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:28.552 14:16:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:28.552 14:16:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:28.552 14:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.552 14:16:07 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:28.552 14:16:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PAKiXnlviL 00:32:28.552 14:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PAKiXnlviL 00:32:28.811 14:16:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mK06Je1VFT 00:32:28.811 14:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mK06Je1VFT 00:32:28.811 14:16:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.811 14:16:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.071 nvme0n1 00:32:29.071 14:16:08 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:29.071 14:16:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:29.331 14:16:08 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:29.331 "subsystems": [ 00:32:29.331 { 00:32:29.331 "subsystem": "keyring", 00:32:29.331 "config": [ 00:32:29.331 { 00:32:29.331 "method": "keyring_file_add_key", 00:32:29.331 "params": { 00:32:29.331 "name": "key0", 00:32:29.331 "path": "/tmp/tmp.PAKiXnlviL" 00:32:29.331 } 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "method": "keyring_file_add_key", 00:32:29.331 "params": { 00:32:29.331 "name": "key1", 00:32:29.331 "path": "/tmp/tmp.mK06Je1VFT" 00:32:29.331 } 00:32:29.331 } 00:32:29.331 ] 00:32:29.331 
}, 00:32:29.331 { 00:32:29.331 "subsystem": "iobuf", 00:32:29.331 "config": [ 00:32:29.331 { 00:32:29.331 "method": "iobuf_set_options", 00:32:29.331 "params": { 00:32:29.331 "small_pool_count": 8192, 00:32:29.331 "large_pool_count": 1024, 00:32:29.331 "small_bufsize": 8192, 00:32:29.331 "large_bufsize": 135168, 00:32:29.331 "enable_numa": false 00:32:29.331 } 00:32:29.331 } 00:32:29.331 ] 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "subsystem": "sock", 00:32:29.331 "config": [ 00:32:29.331 { 00:32:29.331 "method": "sock_set_default_impl", 00:32:29.331 "params": { 00:32:29.331 "impl_name": "posix" 00:32:29.331 } 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "method": "sock_impl_set_options", 00:32:29.331 "params": { 00:32:29.331 "impl_name": "ssl", 00:32:29.331 "recv_buf_size": 4096, 00:32:29.331 "send_buf_size": 4096, 00:32:29.331 "enable_recv_pipe": true, 00:32:29.331 "enable_quickack": false, 00:32:29.331 "enable_placement_id": 0, 00:32:29.331 "enable_zerocopy_send_server": true, 00:32:29.331 "enable_zerocopy_send_client": false, 00:32:29.331 "zerocopy_threshold": 0, 00:32:29.331 "tls_version": 0, 00:32:29.331 "enable_ktls": false 00:32:29.331 } 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "method": "sock_impl_set_options", 00:32:29.331 "params": { 00:32:29.331 "impl_name": "posix", 00:32:29.331 "recv_buf_size": 2097152, 00:32:29.331 "send_buf_size": 2097152, 00:32:29.331 "enable_recv_pipe": true, 00:32:29.331 "enable_quickack": false, 00:32:29.331 "enable_placement_id": 0, 00:32:29.331 "enable_zerocopy_send_server": true, 00:32:29.331 "enable_zerocopy_send_client": false, 00:32:29.331 "zerocopy_threshold": 0, 00:32:29.331 "tls_version": 0, 00:32:29.331 "enable_ktls": false 00:32:29.331 } 00:32:29.331 } 00:32:29.331 ] 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "subsystem": "vmd", 00:32:29.331 "config": [] 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "subsystem": "accel", 00:32:29.331 "config": [ 00:32:29.331 { 00:32:29.331 "method": "accel_set_options", 00:32:29.331 "params": { 00:32:29.331 "small_cache_size": 128, 00:32:29.331 "large_cache_size": 16, 00:32:29.331 "task_count": 2048, 00:32:29.331 "sequence_count": 2048, 00:32:29.331 "buf_count": 2048 00:32:29.331 } 00:32:29.331 } 00:32:29.331 ] 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "subsystem": "bdev", 00:32:29.331 "config": [ 00:32:29.331 { 00:32:29.331 "method": "bdev_set_options", 00:32:29.331 "params": { 00:32:29.331 "bdev_io_pool_size": 65535, 00:32:29.331 "bdev_io_cache_size": 256, 00:32:29.331 "bdev_auto_examine": true, 00:32:29.331 "iobuf_small_cache_size": 128, 00:32:29.331 "iobuf_large_cache_size": 16 00:32:29.331 } 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "method": "bdev_raid_set_options", 00:32:29.331 "params": { 00:32:29.331 "process_window_size_kb": 1024, 00:32:29.331 "process_max_bandwidth_mb_sec": 0 00:32:29.331 } 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "method": "bdev_iscsi_set_options", 00:32:29.331 "params": { 00:32:29.331 "timeout_sec": 30 00:32:29.331 } 00:32:29.331 }, 00:32:29.331 { 00:32:29.331 "method": "bdev_nvme_set_options", 00:32:29.331 "params": { 00:32:29.331 "action_on_timeout": "none", 00:32:29.331 "timeout_us": 0, 00:32:29.331 "timeout_admin_us": 0, 00:32:29.331 "keep_alive_timeout_ms": 10000, 00:32:29.331 "arbitration_burst": 0, 00:32:29.331 "low_priority_weight": 0, 00:32:29.331 "medium_priority_weight": 0, 00:32:29.331 "high_priority_weight": 0, 00:32:29.331 "nvme_adminq_poll_period_us": 10000, 00:32:29.331 "nvme_ioq_poll_period_us": 0, 00:32:29.331 "io_queue_requests": 512, 00:32:29.331 
"delay_cmd_submit": true, 00:32:29.331 "transport_retry_count": 4, 00:32:29.331 "bdev_retry_count": 3, 00:32:29.331 "transport_ack_timeout": 0, 00:32:29.331 "ctrlr_loss_timeout_sec": 0, 00:32:29.331 "reconnect_delay_sec": 0, 00:32:29.331 "fast_io_fail_timeout_sec": 0, 00:32:29.331 "disable_auto_failback": false, 00:32:29.331 "generate_uuids": false, 00:32:29.331 "transport_tos": 0, 00:32:29.331 "nvme_error_stat": false, 00:32:29.331 "rdma_srq_size": 0, 00:32:29.331 "io_path_stat": false, 00:32:29.331 "allow_accel_sequence": false, 00:32:29.331 "rdma_max_cq_size": 0, 00:32:29.331 "rdma_cm_event_timeout_ms": 0, 00:32:29.331 "dhchap_digests": [ 00:32:29.331 "sha256", 00:32:29.331 "sha384", 00:32:29.331 "sha512" 00:32:29.332 ], 00:32:29.332 "dhchap_dhgroups": [ 00:32:29.332 "null", 00:32:29.332 "ffdhe2048", 00:32:29.332 "ffdhe3072", 00:32:29.332 "ffdhe4096", 00:32:29.332 "ffdhe6144", 00:32:29.332 "ffdhe8192" 00:32:29.332 ] 00:32:29.332 } 00:32:29.332 }, 00:32:29.332 { 00:32:29.332 "method": "bdev_nvme_attach_controller", 00:32:29.332 "params": { 00:32:29.332 "name": "nvme0", 00:32:29.332 "trtype": "TCP", 00:32:29.332 "adrfam": "IPv4", 00:32:29.332 "traddr": "127.0.0.1", 00:32:29.332 "trsvcid": "4420", 00:32:29.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.332 "prchk_reftag": false, 00:32:29.332 "prchk_guard": false, 00:32:29.332 "ctrlr_loss_timeout_sec": 0, 00:32:29.332 "reconnect_delay_sec": 0, 00:32:29.332 "fast_io_fail_timeout_sec": 0, 00:32:29.332 "psk": "key0", 00:32:29.332 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.332 "hdgst": false, 00:32:29.332 "ddgst": false, 00:32:29.332 "multipath": "multipath" 00:32:29.332 } 00:32:29.332 }, 00:32:29.332 { 00:32:29.332 "method": "bdev_nvme_set_hotplug", 00:32:29.332 "params": { 00:32:29.332 "period_us": 100000, 00:32:29.332 "enable": false 00:32:29.332 } 00:32:29.332 }, 00:32:29.332 { 00:32:29.332 "method": "bdev_wait_for_examine" 00:32:29.332 } 00:32:29.332 ] 00:32:29.332 }, 00:32:29.332 { 00:32:29.332 "subsystem": "nbd", 00:32:29.332 "config": [] 00:32:29.332 } 00:32:29.332 ] 00:32:29.332 }' 00:32:29.332 14:16:08 keyring_file -- keyring/file.sh@115 -- # killprocess 1195953 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1195953 ']' 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1195953 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@957 -- # uname 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1195953 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1195953' 00:32:29.332 killing process with pid 1195953 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@971 -- # kill 1195953 00:32:29.332 Received shutdown signal, test time was about 1.000000 seconds 00:32:29.332 00:32:29.332 Latency(us) 00:32:29.332 [2024-11-06T13:16:08.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.332 [2024-11-06T13:16:08.616Z] =================================================================================================================== 00:32:29.332 [2024-11-06T13:16:08.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.332 14:16:08 
keyring_file -- common/autotest_common.sh@976 -- # wait 1195953 00:32:29.332 14:16:08 keyring_file -- keyring/file.sh@118 -- # bperfpid=1197805 00:32:29.332 14:16:08 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1197805 /var/tmp/bperf.sock 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1197805 ']' 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:29.332 14:16:08 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.593 14:16:08 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:29.593 14:16:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.593 14:16:08 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:29.593 14:16:08 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:29.593 "subsystems": [ 00:32:29.593 { 00:32:29.593 "subsystem": "keyring", 00:32:29.593 "config": [ 00:32:29.593 { 00:32:29.593 "method": "keyring_file_add_key", 00:32:29.593 "params": { 00:32:29.593 "name": "key0", 00:32:29.593 "path": "/tmp/tmp.PAKiXnlviL" 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "keyring_file_add_key", 00:32:29.593 "params": { 00:32:29.593 "name": "key1", 00:32:29.593 "path": "/tmp/tmp.mK06Je1VFT" 00:32:29.593 } 00:32:29.593 } 00:32:29.593 ] 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "subsystem": "iobuf", 00:32:29.593 "config": [ 00:32:29.593 { 00:32:29.593 "method": "iobuf_set_options", 00:32:29.593 "params": { 00:32:29.593 "small_pool_count": 8192, 00:32:29.593 "large_pool_count": 1024, 00:32:29.593 "small_bufsize": 8192, 00:32:29.593 "large_bufsize": 135168, 00:32:29.593 "enable_numa": false 00:32:29.593 } 00:32:29.593 } 00:32:29.593 ] 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "subsystem": "sock", 00:32:29.593 "config": [ 00:32:29.593 { 00:32:29.593 "method": "sock_set_default_impl", 00:32:29.593 "params": { 00:32:29.593 "impl_name": "posix" 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "sock_impl_set_options", 00:32:29.593 "params": { 00:32:29.593 "impl_name": "ssl", 00:32:29.593 "recv_buf_size": 4096, 00:32:29.593 "send_buf_size": 4096, 00:32:29.593 "enable_recv_pipe": true, 00:32:29.593 "enable_quickack": false, 00:32:29.593 "enable_placement_id": 0, 00:32:29.593 "enable_zerocopy_send_server": true, 00:32:29.593 "enable_zerocopy_send_client": false, 00:32:29.593 "zerocopy_threshold": 0, 00:32:29.593 "tls_version": 0, 00:32:29.593 "enable_ktls": false 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "sock_impl_set_options", 00:32:29.593 "params": { 00:32:29.593 "impl_name": "posix", 00:32:29.593 "recv_buf_size": 2097152, 00:32:29.593 "send_buf_size": 2097152, 00:32:29.593 "enable_recv_pipe": true, 00:32:29.593 "enable_quickack": false, 00:32:29.593 "enable_placement_id": 0, 00:32:29.593 "enable_zerocopy_send_server": true, 00:32:29.593 "enable_zerocopy_send_client": false, 00:32:29.593 "zerocopy_threshold": 0, 00:32:29.593 "tls_version": 0, 00:32:29.593 "enable_ktls": false 00:32:29.593 } 00:32:29.593 } 00:32:29.593 ] 00:32:29.593 }, 
00:32:29.593 { 00:32:29.593 "subsystem": "vmd", 00:32:29.593 "config": [] 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "subsystem": "accel", 00:32:29.593 "config": [ 00:32:29.593 { 00:32:29.593 "method": "accel_set_options", 00:32:29.593 "params": { 00:32:29.593 "small_cache_size": 128, 00:32:29.593 "large_cache_size": 16, 00:32:29.593 "task_count": 2048, 00:32:29.593 "sequence_count": 2048, 00:32:29.593 "buf_count": 2048 00:32:29.593 } 00:32:29.593 } 00:32:29.593 ] 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "subsystem": "bdev", 00:32:29.593 "config": [ 00:32:29.593 { 00:32:29.593 "method": "bdev_set_options", 00:32:29.593 "params": { 00:32:29.593 "bdev_io_pool_size": 65535, 00:32:29.593 "bdev_io_cache_size": 256, 00:32:29.593 "bdev_auto_examine": true, 00:32:29.593 "iobuf_small_cache_size": 128, 00:32:29.593 "iobuf_large_cache_size": 16 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "bdev_raid_set_options", 00:32:29.593 "params": { 00:32:29.593 "process_window_size_kb": 1024, 00:32:29.593 "process_max_bandwidth_mb_sec": 0 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "bdev_iscsi_set_options", 00:32:29.593 "params": { 00:32:29.593 "timeout_sec": 30 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "bdev_nvme_set_options", 00:32:29.593 "params": { 00:32:29.593 "action_on_timeout": "none", 00:32:29.593 "timeout_us": 0, 00:32:29.593 "timeout_admin_us": 0, 00:32:29.593 "keep_alive_timeout_ms": 10000, 00:32:29.593 "arbitration_burst": 0, 00:32:29.593 "low_priority_weight": 0, 00:32:29.593 "medium_priority_weight": 0, 00:32:29.593 "high_priority_weight": 0, 00:32:29.593 "nvme_adminq_poll_period_us": 10000, 00:32:29.593 "nvme_ioq_poll_period_us": 0, 00:32:29.593 "io_queue_requests": 512, 00:32:29.593 "delay_cmd_submit": true, 00:32:29.593 "transport_retry_count": 4, 00:32:29.593 "bdev_retry_count": 3, 00:32:29.593 "transport_ack_timeout": 0, 00:32:29.593 "ctrlr_loss_timeout_sec": 0, 00:32:29.593 "reconnect_delay_sec": 0, 00:32:29.593 "fast_io_fail_timeout_sec": 0, 00:32:29.593 "disable_auto_failback": false, 00:32:29.593 "generate_uuids": false, 00:32:29.593 "transport_tos": 0, 00:32:29.593 "nvme_error_stat": false, 00:32:29.593 "rdma_srq_size": 0, 00:32:29.593 "io_path_stat": false, 00:32:29.593 "allow_accel_sequence": false, 00:32:29.593 "rdma_max_cq_size": 0, 00:32:29.593 "rdma_cm_event_timeout_ms": 0, 00:32:29.593 "dhchap_digests": [ 00:32:29.593 "sha256", 00:32:29.593 "sha384", 00:32:29.593 "sha512" 00:32:29.593 ], 00:32:29.593 "dhchap_dhgroups": [ 00:32:29.593 "null", 00:32:29.593 "ffdhe2048", 00:32:29.593 "ffdhe3072", 00:32:29.593 "ffdhe4096", 00:32:29.593 "ffdhe6144", 00:32:29.593 "ffdhe8192" 00:32:29.593 ] 00:32:29.593 } 00:32:29.593 }, 00:32:29.593 { 00:32:29.593 "method": "bdev_nvme_attach_controller", 00:32:29.594 "params": { 00:32:29.594 "name": "nvme0", 00:32:29.594 "trtype": "TCP", 00:32:29.594 "adrfam": "IPv4", 00:32:29.594 "traddr": "127.0.0.1", 00:32:29.594 "trsvcid": "4420", 00:32:29.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.594 "prchk_reftag": false, 00:32:29.594 "prchk_guard": false, 00:32:29.594 "ctrlr_loss_timeout_sec": 0, 00:32:29.594 "reconnect_delay_sec": 0, 00:32:29.594 "fast_io_fail_timeout_sec": 0, 00:32:29.594 "psk": "key0", 00:32:29.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.594 "hdgst": false, 00:32:29.594 "ddgst": false, 00:32:29.594 "multipath": "multipath" 00:32:29.594 } 00:32:29.594 }, 00:32:29.594 { 00:32:29.594 "method": "bdev_nvme_set_hotplug", 00:32:29.594 "params": { 
00:32:29.594 "period_us": 100000, 00:32:29.594 "enable": false 00:32:29.594 } 00:32:29.594 }, 00:32:29.594 { 00:32:29.594 "method": "bdev_wait_for_examine" 00:32:29.594 } 00:32:29.594 ] 00:32:29.594 }, 00:32:29.594 { 00:32:29.594 "subsystem": "nbd", 00:32:29.594 "config": [] 00:32:29.594 } 00:32:29.594 ] 00:32:29.594 }' 00:32:29.594 [2024-11-06 14:16:08.637724] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 00:32:29.594 [2024-11-06 14:16:08.637770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197805 ] 00:32:29.594 [2024-11-06 14:16:08.692851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.594 [2024-11-06 14:16:08.722480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.594 [2024-11-06 14:16:08.866648] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:30.164 14:16:09 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:30.164 14:16:09 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:32:30.164 14:16:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:30.164 14:16:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:30.164 14:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.423 14:16:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:30.423 14:16:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:30.423 14:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:30.423 14:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:30.423 14:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.423 14:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:30.423 14:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.682 14:16:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:30.682 14:16:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:30.682 14:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:30.682 14:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:30.682 14:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.682 14:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.682 14:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:30.682 14:16:09 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:30.682 14:16:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:30.682 14:16:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:30.682 14:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:30.941 14:16:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:30.941 14:16:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:30.941 14:16:10 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.PAKiXnlviL /tmp/tmp.mK06Je1VFT 00:32:30.941 14:16:10 keyring_file -- keyring/file.sh@20 -- # killprocess 1197805 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1197805 ']' 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1197805 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@957 -- # uname 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1197805 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1197805' 00:32:30.941 killing process with pid 1197805 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@971 -- # kill 1197805 00:32:30.941 Received shutdown signal, test time was about 1.000000 seconds 00:32:30.941 00:32:30.941 Latency(us) 00:32:30.941 [2024-11-06T13:16:10.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.941 [2024-11-06T13:16:10.225Z] =================================================================================================================== 00:32:30.941 [2024-11-06T13:16:10.225Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:30.941 14:16:10 keyring_file -- common/autotest_common.sh@976 -- # wait 1197805 00:32:31.199 14:16:10 keyring_file -- keyring/file.sh@21 -- # killprocess 1195824 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1195824 ']' 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1195824 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@957 -- # uname 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1195824 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1195824' 00:32:31.199 killing process with pid 1195824 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@971 -- # kill 1195824 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@976 -- # wait 1195824 00:32:31.199 00:32:31.199 real 0m10.961s 00:32:31.199 user 0m26.102s 00:32:31.199 sys 0m2.230s 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:31.199 14:16:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:31.199 ************************************ 00:32:31.199 END TEST keyring_file 00:32:31.199 ************************************ 00:32:31.459 14:16:10 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:32:31.459 14:16:10 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:31.459 14:16:10 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:31.459 14:16:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:31.459 14:16:10 
-- common/autotest_common.sh@10 -- # set +x 00:32:31.459 ************************************ 00:32:31.459 START TEST keyring_linux 00:32:31.459 ************************************ 00:32:31.459 14:16:10 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:31.459 Joined session keyring: 1025292412 00:32:31.459 * Looking for test storage... 00:32:31.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:31.459 14:16:10 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:31.459 14:16:10 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:32:31.459 14:16:10 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:31.459 14:16:10 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:31.459 14:16:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.459 14:16:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:31.460 14:16:10 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.460 14:16:10 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:31.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.460 --rc genhtml_branch_coverage=1 00:32:31.460 --rc genhtml_function_coverage=1 00:32:31.460 --rc genhtml_legend=1 00:32:31.460 --rc geninfo_all_blocks=1 00:32:31.460 --rc geninfo_unexecuted_blocks=1 00:32:31.460 00:32:31.460 ' 00:32:31.460 14:16:10 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:31.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.460 --rc genhtml_branch_coverage=1 00:32:31.460 --rc genhtml_function_coverage=1 00:32:31.460 --rc genhtml_legend=1 00:32:31.460 --rc geninfo_all_blocks=1 00:32:31.460 --rc geninfo_unexecuted_blocks=1 00:32:31.460 00:32:31.460 ' 00:32:31.460 14:16:10 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:31.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.460 --rc genhtml_branch_coverage=1 00:32:31.460 --rc genhtml_function_coverage=1 00:32:31.460 --rc genhtml_legend=1 00:32:31.460 --rc geninfo_all_blocks=1 00:32:31.460 --rc geninfo_unexecuted_blocks=1 00:32:31.460 00:32:31.460 ' 00:32:31.460 14:16:10 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:31.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.460 --rc genhtml_branch_coverage=1 00:32:31.460 --rc genhtml_function_coverage=1 00:32:31.460 --rc genhtml_legend=1 00:32:31.460 --rc geninfo_all_blocks=1 00:32:31.460 --rc geninfo_unexecuted_blocks=1 00:32:31.460 00:32:31.460 ' 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.460 14:16:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.460 14:16:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.460 14:16:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.460 14:16:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.460 14:16:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:31.460 14:16:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
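
The `lt 1.15 2` check traced a few lines up is autotest deciding which set of lcov coverage flags to export: scripts/common.sh's cmp_versions splits each version string on `.`, `-` and `:`, compares the fields numerically left to right, and treats a missing field as 0. A minimal Python mirror of that comparison (hypothetical helper; the real decimal() has extra cases for non-numeric tokens, which are simply zeroed here):

    import re

    def version_fields(v: str) -> list[int]:
        # cmp_versions splits with IFS=.-: ; non-numeric tokens zeroed (assumption)
        return [int(t) if t.isdigit() else 0 for t in re.split(r"[.:-]", v)]

    def lt(a: str, b: str) -> bool:
        fa, fb = version_fields(a), version_fields(b)
        n = max(len(fa), len(fb))
        fa += [0] * (n - len(fa))
        fb += [0] * (n - len(fb))
        return fa < fb  # list comparison stops at the first differing field

    assert lt("1.15", "2")  # matches the trace: ver1[0]=1 < ver2[0]=2 -> return 0
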
00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:31.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:31.460 /tmp/:spdk-test:key0 00:32:31.460 14:16:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:31.460 14:16:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:31.460 
14:16:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:31.460 14:16:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:31.461 14:16:10 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:31.461 14:16:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:31.461 14:16:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:31.461 14:16:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:31.461 14:16:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:31.461 /tmp/:spdk-test:key1 00:32:31.461 14:16:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1198391 00:32:31.461 14:16:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1198391 00:32:31.461 14:16:10 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1198391 ']' 00:32:31.461 14:16:10 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.461 14:16:10 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:31.461 14:16:10 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.461 14:16:10 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:31.461 14:16:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.461 14:16:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:31.720 [2024-11-06 14:16:10.759443] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
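
Both /tmp/:spdk-test keys above were produced by prep_key -> format_interchange_psk, the same path the keyring_file run used for /tmp/tmp.PAKiXnlviL. A sketch of what the inline `python -` step prints, assuming the TLS PSK interchange layout is base64 of the configured PSK bytes followed by their little-endian CRC-32 (this reproduces the NVMeTLSkey-1:00:... strings handed to keyctl below, but treat the exact CRC convention as an assumption rather than a spec citation):

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int = 0) -> str:
        data = key.encode("ascii")  # the configured PSK is used as ASCII text here
        crc = zlib.crc32(data).to_bytes(4, "little")  # assumed CRC-32 framing
        b64 = base64.b64encode(data + crc).decode("ascii")
        return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

    # format_interchange_psk("00112233445566778899aabbccddeeff", 0) should yield
    # the NVMeTLSkey-1:00:MDAx...JEiQ: string that keyctl stores below
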
00:32:31.720 [2024-11-06 14:16:10.759500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198391 ] 00:32:31.720 [2024-11-06 14:16:10.823598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.720 [2024-11-06 14:16:10.853559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.980 [2024-11-06 14:16:11.021102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.980 null0 00:32:31.980 [2024-11-06 14:16:11.053160] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:31.980 [2024-11-06 14:16:11.053506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:31.980 526383430 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:31.980 567829889 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1198402 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1198402 /var/tmp/bperf.sock 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1198402 ']' 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:31.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:31.980 [2024-11-06 14:16:11.112473] Starting SPDK v25.01-pre git sha1 b7ef84b3d / DPDK 24.03.0 initialization... 
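
keyctl add just returned session-keyring serials 526383430 and 567829889 for :spdk-test:key0 and :spdk-test:key1; later steps resolve a key name back to its serial with `keyctl search @s user <name>` (get_keysn) and drop it with `keyctl unlink <sn>` during cleanup. A round-trip sketch that shells out to keyctl(1) the way linux.sh does (function names are illustrative; keyutils must be installed):

    import subprocess

    def get_keysn(name: str) -> int:
        # mirrors get_keysn: find the key's serial by description in the session keyring
        out = subprocess.check_output(["keyctl", "search", "@s", "user", name], text=True)
        return int(out.strip())

    def unlink_key(name: str) -> None:
        # mirrors unlink_key: one-argument unlink removes the key's links,
        # matching the "1 links removed" messages during cleanup
        subprocess.run(["keyctl", "unlink", str(get_keysn(name))], check=True)

    # get_keysn(":spdk-test:key0") would return 526383430 in the run above
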
00:32:31.980 [2024-11-06 14:16:11.112520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198402 ] 00:32:31.980 [2024-11-06 14:16:11.176098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.980 [2024-11-06 14:16:11.205829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:31.980 14:16:11 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:32:31.980 14:16:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:31.980 14:16:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:32.239 14:16:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:32.239 14:16:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:32.499 14:16:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:32.499 14:16:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:32.499 [2024-11-06 14:16:11.741595] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:32.759 nvme0n1 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:32.759 14:16:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:32.759 14:16:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:32.759 14:16:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.759 14:16:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:32.759 14:16:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.019 14:16:12 keyring_linux -- keyring/linux.sh@25 -- # sn=526383430 00:32:33.019 14:16:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:33.019 14:16:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:33.019 14:16:12 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 526383430 == \5\2\6\3\8\3\4\3\0 ]] 00:32:33.019 14:16:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 526383430 00:32:33.019 14:16:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:33.019 14:16:12 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:33.019 Running I/O for 1 seconds... 00:32:33.956 24178.00 IOPS, 94.45 MiB/s 00:32:33.956 Latency(us) 00:32:33.956 [2024-11-06T13:16:13.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.956 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:33.957 nvme0n1 : 1.01 24178.58 94.45 0.00 0.00 5278.53 4369.07 13871.79 00:32:33.957 [2024-11-06T13:16:13.241Z] =================================================================================================================== 00:32:33.957 [2024-11-06T13:16:13.241Z] Total : 24178.58 94.45 0.00 0.00 5278.53 4369.07 13871.79 00:32:33.957 { 00:32:33.957 "results": [ 00:32:33.957 { 00:32:33.957 "job": "nvme0n1", 00:32:33.957 "core_mask": "0x2", 00:32:33.957 "workload": "randread", 00:32:33.957 "status": "finished", 00:32:33.957 "queue_depth": 128, 00:32:33.957 "io_size": 4096, 00:32:33.957 "runtime": 1.00527, 00:32:33.957 "iops": 24178.578889253633, 00:32:33.957 "mibps": 94.447573786147, 00:32:33.957 "io_failed": 0, 00:32:33.957 "io_timeout": 0, 00:32:33.957 "avg_latency_us": 5278.5283107051755, 00:32:33.957 "min_latency_us": 4369.066666666667, 00:32:33.957 "max_latency_us": 13871.786666666667 00:32:33.957 } 00:32:33.957 ], 00:32:33.957 "core_count": 1 00:32:33.957 } 00:32:33.957 14:16:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:33.957 14:16:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:34.216 14:16:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:34.216 14:16:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:34.216 14:16:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:34.216 14:16:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:34.216 14:16:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:34.216 14:16:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.475 14:16:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.475 [2024-11-06 14:16:13.720205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:34.475 [2024-11-06 14:16:13.721067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab4760 (107): Transport endpoint is not connected 00:32:34.475 [2024-11-06 14:16:13.722063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab4760 (9): Bad file descriptor 00:32:34.475 [2024-11-06 14:16:13.723065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:34.475 [2024-11-06 14:16:13.723071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:34.475 [2024-11-06 14:16:13.723077] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:34.475 [2024-11-06 14:16:13.723083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:32:34.475 request: 00:32:34.475 { 00:32:34.475 "name": "nvme0", 00:32:34.475 "trtype": "tcp", 00:32:34.475 "traddr": "127.0.0.1", 00:32:34.475 "adrfam": "ipv4", 00:32:34.475 "trsvcid": "4420", 00:32:34.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:34.475 "prchk_reftag": false, 00:32:34.475 "prchk_guard": false, 00:32:34.475 "hdgst": false, 00:32:34.475 "ddgst": false, 00:32:34.475 "psk": ":spdk-test:key1", 00:32:34.475 "allow_unrecognized_csi": false, 00:32:34.475 "method": "bdev_nvme_attach_controller", 00:32:34.475 "req_id": 1 00:32:34.475 } 00:32:34.475 Got JSON-RPC error response 00:32:34.475 response: 00:32:34.475 { 00:32:34.475 "code": -5, 00:32:34.475 "message": "Input/output error" 00:32:34.475 } 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@33 -- # sn=526383430 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 526383430 00:32:34.475 1 links removed 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@33 -- # sn=567829889 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 567829889 00:32:34.475 1 links removed 00:32:34.475 14:16:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1198402 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1198402 ']' 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1198402 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:34.475 14:16:13 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1198402 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1198402' 00:32:34.735 killing process with pid 1198402 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@971 -- # kill 1198402 00:32:34.735 Received shutdown signal, test time was about 1.000000 seconds 00:32:34.735 00:32:34.735 
Latency(us) 00:32:34.735 [2024-11-06T13:16:14.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.735 [2024-11-06T13:16:14.019Z] =================================================================================================================== 00:32:34.735 [2024-11-06T13:16:14.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@976 -- # wait 1198402 00:32:34.735 14:16:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1198391 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1198391 ']' 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1198391 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1198391 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1198391' 00:32:34.735 killing process with pid 1198391 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@971 -- # kill 1198391 00:32:34.735 14:16:13 keyring_linux -- common/autotest_common.sh@976 -- # wait 1198391 00:32:34.995 00:32:34.995 real 0m3.613s 00:32:34.995 user 0m6.821s 00:32:34.995 sys 0m1.195s 00:32:34.995 14:16:14 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:34.995 14:16:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:34.995 ************************************ 00:32:34.995 END TEST keyring_linux 00:32:34.995 ************************************ 00:32:34.995 14:16:14 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:34.995 14:16:14 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:34.995 14:16:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:34.995 14:16:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:34.995 14:16:14 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:34.995 14:16:14 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:34.995 14:16:14 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:34.995 14:16:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.995 14:16:14 -- common/autotest_common.sh@10 -- # set +x 00:32:34.995 14:16:14 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:34.995 14:16:14 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:32:34.995 14:16:14 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:32:34.995 14:16:14 -- common/autotest_common.sh@10 -- # set +x 00:32:40.271 INFO: APP EXITING 
00:32:42.812 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:65:00.0 (144d a80a): Already using the nvme driver
00:32:42.812 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:32:42.812 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
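The "Already using the ... driver" lines report PCI devices that are already bound back to their kernel drivers during teardown. A quick manual check of one device's binding, using the NVMe address from the log (the sysfs query is generic; setup.sh assumes an SPDK checkout):

  # Which kernel driver currently claims the device?
  basename "$(readlink /sys/bus/pci/devices/0000:65:00.0/driver)"   # expect: nvme
  # Summarize binding for all devices SPDK manages:
  sudo ./scripts/setup.sh status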
00:32:44.718 Cleaning
00:32:44.718 Removing: /var/run/dpdk/spdk0/config
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:32:44.718 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:44.718 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:44.718 Removing: /var/run/dpdk/spdk1/config
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:44.718 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:44.718 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:44.718 Removing: /var/run/dpdk/spdk2/config
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:44.718 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:44.718 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:44.718 Removing: /var/run/dpdk/spdk3/config
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:44.718 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:44.718 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:44.718 Removing: /var/run/dpdk/spdk4/config
00:32:44.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:44.978 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:44.978 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:44.978 Removing: /dev/shm/bdev_svc_trace.1
00:32:44.978 Removing: /dev/shm/nvmf_trace.0
00:32:44.978 Removing: /dev/shm/spdk_tgt_trace.pid600147
00:32:44.978 Removing: /var/run/dpdk/spdk0
00:32:44.978 Removing: /var/run/dpdk/spdk1
00:32:44.978 Removing: /var/run/dpdk/spdk2
00:32:44.978 Removing: /var/run/dpdk/spdk3
00:32:44.978 Removing: /var/run/dpdk/spdk4
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1004736
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1004740
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1027053
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1028000
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1028675
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1029353
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1030079
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1030754
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1031429
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1032107
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1037484
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1037825
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1045519
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1045776
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1052684
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1058597
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1071226
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1071903
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1077283
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1077658
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1082999
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1090044
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1093431
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1106251
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1117615
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1120160
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1121479
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1141753
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1146718
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1150298
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1157825
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1158007
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1164051
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1166565
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1169277
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1170792
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1173432
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1174974
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1185664
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1186326
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1186990
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1189996
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1190616
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1191232
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1195824
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1195953
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1197805
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1198391
00:32:44.978 Removing: /var/run/dpdk/spdk_pid1198402
00:32:44.978 Removing: /var/run/dpdk/spdk_pid598347
00:32:44.978 Removing: /var/run/dpdk/spdk_pid600147
00:32:44.978 Removing: /var/run/dpdk/spdk_pid600746
00:32:44.978 Removing: /var/run/dpdk/spdk_pid602024
00:32:44.978 Removing: /var/run/dpdk/spdk_pid602082
00:32:44.978 Removing: /var/run/dpdk/spdk_pid603429
00:32:44.978 Removing: /var/run/dpdk/spdk_pid603440
00:32:44.978 Removing: /var/run/dpdk/spdk_pid603888
00:32:44.978 Removing: /var/run/dpdk/spdk_pid605024
00:32:44.978 Removing: /var/run/dpdk/spdk_pid605490
00:32:44.978 Removing: /var/run/dpdk/spdk_pid605882
00:32:44.978 Removing: /var/run/dpdk/spdk_pid606278
00:32:44.978 Removing: /var/run/dpdk/spdk_pid606683
00:32:44.978 Removing: /var/run/dpdk/spdk_pid606760
00:32:44.978 Removing: /var/run/dpdk/spdk_pid607115
00:32:44.978 Removing: /var/run/dpdk/spdk_pid607463
00:32:44.978 Removing: /var/run/dpdk/spdk_pid607846
00:32:44.978 Removing: /var/run/dpdk/spdk_pid608566
00:32:44.978 Removing: /var/run/dpdk/spdk_pid612146
00:32:44.978 Removing: /var/run/dpdk/spdk_pid612192
00:32:44.978 Removing: /var/run/dpdk/spdk_pid612534
00:32:44.978 Removing: /var/run/dpdk/spdk_pid612551
00:32:44.978 Removing: /var/run/dpdk/spdk_pid612922
00:32:44.978 Removing: /var/run/dpdk/spdk_pid612928
00:32:44.978 Removing: /var/run/dpdk/spdk_pid613400
00:32:44.978 Removing: /var/run/dpdk/spdk_pid613601
00:32:44.978 Removing: /var/run/dpdk/spdk_pid613686
00:32:44.978 Removing: /var/run/dpdk/spdk_pid614002
00:32:44.978 Removing: /var/run/dpdk/spdk_pid614196
00:32:44.978 Removing: /var/run/dpdk/spdk_pid614367
00:32:44.978 Removing: /var/run/dpdk/spdk_pid614810
00:32:44.978 Removing: /var/run/dpdk/spdk_pid615158
00:32:44.978 Removing: /var/run/dpdk/spdk_pid615445
00:32:44.978 Removing: /var/run/dpdk/spdk_pid620093
00:32:44.978 Removing: /var/run/dpdk/spdk_pid625486
00:32:44.978 Removing: /var/run/dpdk/spdk_pid639116
00:32:44.978 Removing: /var/run/dpdk/spdk_pid640117
00:32:44.978 Removing: /var/run/dpdk/spdk_pid645521
00:32:44.978 Removing: /var/run/dpdk/spdk_pid645881
00:32:45.240 Removing: /var/run/dpdk/spdk_pid651272
00:32:45.240 Removing: /var/run/dpdk/spdk_pid658669
00:32:45.240 Removing: /var/run/dpdk/spdk_pid662092
00:32:45.240 Removing: /var/run/dpdk/spdk_pid674937
00:32:45.240 Removing: /var/run/dpdk/spdk_pid686353
00:32:45.240 Removing: /var/run/dpdk/spdk_pid688673
00:32:45.240 Removing: /var/run/dpdk/spdk_pid689999
00:32:45.240 Removing: /var/run/dpdk/spdk_pid712216
00:32:45.240 Removing: /var/run/dpdk/spdk_pid717021
00:32:45.240 Removing: /var/run/dpdk/spdk_pid776809
00:32:45.240 Removing: /var/run/dpdk/spdk_pid783525
00:32:45.240 Removing: /var/run/dpdk/spdk_pid791019
00:32:45.240 Removing: /var/run/dpdk/spdk_pid799270
00:32:45.240 Removing: /var/run/dpdk/spdk_pid799276
00:32:45.240 Removing: /var/run/dpdk/spdk_pid800474
00:32:45.240 Removing: /var/run/dpdk/spdk_pid801602
00:32:45.240 Removing: /var/run/dpdk/spdk_pid802732
00:32:45.240 Removing: /var/run/dpdk/spdk_pid803591
00:32:45.240 Removing: /var/run/dpdk/spdk_pid803606
00:32:45.240 Removing: /var/run/dpdk/spdk_pid803934
00:32:45.240 Removing: /var/run/dpdk/spdk_pid803955
00:32:45.240 Removing: /var/run/dpdk/spdk_pid804035
00:32:45.240 Removing: /var/run/dpdk/spdk_pid805277
00:32:45.240 Removing: /var/run/dpdk/spdk_pid806317
00:32:45.240 Removing: /var/run/dpdk/spdk_pid807633
00:32:45.240 Removing: /var/run/dpdk/spdk_pid808305
00:32:45.240 Removing: /var/run/dpdk/spdk_pid808307
00:32:45.240 Removing: /var/run/dpdk/spdk_pid808643
00:32:45.240 Removing: /var/run/dpdk/spdk_pid809923
00:32:45.240 Removing: /var/run/dpdk/spdk_pid811135
00:32:45.240 Removing: /var/run/dpdk/spdk_pid822328
00:32:45.240 Removing: /var/run/dpdk/spdk_pid856612
00:32:45.240 Removing: /var/run/dpdk/spdk_pid862173
00:32:45.240 Removing: /var/run/dpdk/spdk_pid864492
00:32:45.240 Removing: /var/run/dpdk/spdk_pid867569
00:32:45.240 Removing: /var/run/dpdk/spdk_pid867722
00:32:45.240 Removing: /var/run/dpdk/spdk_pid867785
00:32:45.240 Removing: /var/run/dpdk/spdk_pid868068
00:32:45.240 Removing: /var/run/dpdk/spdk_pid868454
00:32:45.240 Removing: /var/run/dpdk/spdk_pid870874
00:32:45.240 Removing: /var/run/dpdk/spdk_pid871858
00:32:45.240 Removing: /var/run/dpdk/spdk_pid872496
00:32:45.240 Removing: /var/run/dpdk/spdk_pid875273
00:32:45.240 Removing: /var/run/dpdk/spdk_pid875972
00:32:45.240 Removing: /var/run/dpdk/spdk_pid876679
00:32:45.240 Removing: /var/run/dpdk/spdk_pid881747
00:32:45.240 Removing: /var/run/dpdk/spdk_pid888805
00:32:45.240 Removing: /var/run/dpdk/spdk_pid888806
00:32:45.240 Removing: /var/run/dpdk/spdk_pid888807
00:32:45.240 Removing: /var/run/dpdk/spdk_pid893825
00:32:45.240 Removing: /var/run/dpdk/spdk_pid904778
00:32:45.240 Removing: /var/run/dpdk/spdk_pid910212
00:32:45.240 Removing: /var/run/dpdk/spdk_pid917738
00:32:45.240 Removing: /var/run/dpdk/spdk_pid919245
00:32:45.240 Removing: /var/run/dpdk/spdk_pid921077
00:32:45.240 Removing: /var/run/dpdk/spdk_pid923012
00:32:45.240 Removing: /var/run/dpdk/spdk_pid929168
00:32:45.240 Removing: /var/run/dpdk/spdk_pid934501
00:32:45.240 Removing: /var/run/dpdk/spdk_pid939687
00:32:45.240 Removing: /var/run/dpdk/spdk_pid949126
00:32:45.240 Removing: /var/run/dpdk/spdk_pid949131
00:32:45.240 Removing: /var/run/dpdk/spdk_pid954476
00:32:45.240 Removing: /var/run/dpdk/spdk_pid954724
00:32:45.240 Removing: /var/run/dpdk/spdk_pid954881
00:32:45.240 Removing: /var/run/dpdk/spdk_pid955528
00:32:45.240 Removing: /var/run/dpdk/spdk_pid955535
00:32:45.240 Removing: /var/run/dpdk/spdk_pid961392
00:32:45.240 Removing: /var/run/dpdk/spdk_pid962075
00:32:45.240 Removing: /var/run/dpdk/spdk_pid967579
00:32:45.240 Removing: /var/run/dpdk/spdk_pid971251
00:32:45.240 Removing: /var/run/dpdk/spdk_pid977964
00:32:45.240 Removing: /var/run/dpdk/spdk_pid984836
00:32:45.240 Removing: /var/run/dpdk/spdk_pid995940
00:32:45.240 Clean
00:32:45.240 14:16:24 -- common/autotest_common.sh@1451 -- # return 0
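The removals above cover per-process DPDK runtime state and SPDK shared-memory trace files. Roughly, the cleanup amounts to the sketch below; this is an approximation for orientation only, since the real logic lives in autotest_cleanup in common/autotest_common.sh:

  # Approximate equivalent of the cleanup logged above (illustrative).
  sudo rm -rf /var/run/dpdk/spdk*               # config, fbarray_*, hugepage_info per instance
  sudo rm -f /dev/shm/spdk_tgt_trace.pid*       # target trace shm files
  sudo rm -f /dev/shm/nvmf_trace.* /dev/shm/bdev_svc_trace.*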
00:32:45.240 14:16:24 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:32:45.240 14:16:24 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:45.240 14:16:24 -- common/autotest_common.sh@10 -- # set +x
00:32:45.500 14:16:24 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:32:45.500 14:16:24 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:45.500 14:16:24 -- common/autotest_common.sh@10 -- # set +x
00:32:45.500 14:16:24 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:45.500 14:16:24 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:45.500 14:16:24 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:45.500 14:16:24 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:32:45.500 14:16:24 -- spdk/autotest.sh@394 -- # hostname
00:32:45.500 14:16:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:45.500 geninfo: WARNING: invalid characters removed from testname!
00:33:07.444 14:16:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:07.444 14:16:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:08.823 14:16:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:10.728 14:16:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:12.230 14:16:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:14.138 14:16:53 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
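Stripped of the repeated --rc switches, the coverage post-processing above reduces to one capture, one merge with the pre-test baseline, and a series of remove filters. A condensed sketch, with $OUT standing in for the long output path (shorthand only, not a variable the script defines):

  lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"       # capture test counters
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge with baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"                # drop out-of-scope code
  done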
00:33:15.519 14:16:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:15.519 14:16:54 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:15.519 14:16:54 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:33:15.519 14:16:54 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:15.519 14:16:54 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:15.519 14:16:54 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:15.519 + [[ -n 517975 ]]
00:33:15.519 + sudo kill 517975
00:33:15.530 [Pipeline] }
00:33:15.546 [Pipeline] // stage
00:33:15.551 [Pipeline] }
00:33:15.567 [Pipeline] // timeout
00:33:15.572 [Pipeline] }
00:33:15.586 [Pipeline] // catchError
00:33:15.591 [Pipeline] }
00:33:15.607 [Pipeline] // wrap
00:33:15.613 [Pipeline] }
00:33:15.625 [Pipeline] // catchError
00:33:15.634 [Pipeline] stage
00:33:15.637 [Pipeline] { (Epilogue)
00:33:15.650 [Pipeline] catchError
00:33:15.652 [Pipeline] {
00:33:15.666 [Pipeline] echo
00:33:15.668 Cleanup processes
00:33:15.674 [Pipeline] sh
00:33:15.962 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:15.962 1210505 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:15.976 [Pipeline] sh
00:33:16.261 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:16.261 ++ grep -v 'sudo pgrep'
00:33:16.261 ++ awk '{print $1}'
00:33:16.261 + sudo kill -9
00:33:16.261 + true
00:33:16.274 [Pipeline] sh
00:33:16.559 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:26.562 [Pipeline] sh
00:33:26.844 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:26.844 Artifacts sizes are good
00:33:26.860 [Pipeline] archiveArtifacts
00:33:26.869 Archiving artifacts
00:33:27.005 [Pipeline] sh
00:33:27.291 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:27.305 [Pipeline] cleanWs
00:33:27.315 [WS-CLEANUP] Deleting project workspace...
00:33:27.315 [WS-CLEANUP] Deferred wipeout is used...
00:33:27.321 [WS-CLEANUP] done
00:33:27.322 [Pipeline] }
00:33:27.339 [Pipeline] // catchError
00:33:27.352 [Pipeline] sh
00:33:27.636 + logger -p user.info -t JENKINS-CI
00:33:27.645 [Pipeline] }
00:33:27.658 [Pipeline] // stage
00:33:27.664 [Pipeline] }
00:33:27.678 [Pipeline] // node
00:33:27.683 [Pipeline] End of Pipeline
00:33:27.728 Finished: SUCCESS
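The process sweep in the epilogue above is a reusable idiom: list candidate processes, drop the pgrep invocation itself, and force-kill whatever remains. As a standalone snippet, with $WORKSPACE standing in for the hard-coded Jenkins path:

  # Kill any SPDK processes still running under the workspace (illustrative).
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # '|| true' keeps the step from failing when nothing is left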